[clang] [llvm] [AsmParser] Revamp how floating-point literals work in LLVM IR. (PR #121838)
Joshua Cranmer via llvm-commits
llvm-commits at lists.llvm.org
Mon Jan 6 13:23:31 PST 2025
https://github.com/jcranmer-intel created https://github.com/llvm/llvm-project/pull/121838
This adds support for the following kinds of formats:
* Hexadecimal literals like 0x1.fp13
* Special values +inf/-inf, +qnan/-qnan
* NaN values with payloads like +nan(0x1)
Additionally, the floating-point hexadecimal format that records the bit pattern exactly no longer requires a type-specific prefix such as 0xL or 0xK. This format is removed from the documentation, but is still supported as a legacy format in the parser.
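For example, with this change the assembler accepts all of the following forms (adapted from the new llvm/test/Assembler/float-literals.ll test in this patch; the global names are illustrative):

  @a = global float 0x1.fp13           ; C-style hexadecimal literal
  @b = global double +inf              ; signed infinity
  @c = global half -qnan               ; preferred quiet NaN
  @d = global fp128 +nan(0xdeadbeef)   ; NaN with an explicit payload
  @e = global bfloat f0x7FC0           ; exact bit pattern, replacing 0xR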
>From e4869d363443fb40ab10822d34a87e5dfa967c7e Mon Sep 17 00:00:00 2001
From: Joshua Cranmer <joshua.cranmer at intel.com>
Date: Mon, 6 Jan 2025 13:14:33 -0800
Subject: [PATCH 1/3] [AsmParser] Revamp how floating-point literals work in
LLVM IR.
This adds support for the following kinds of formats:
* Hexadecimal literals like 0x1.fp13
* Special values +inf/-inf, +qnan/-qnan
* NaN values with payloads like +nan(0x1)
Additionally, the floating-point hexadecimal format that records the
bit pattern exactly no longer requires a type-specific prefix such as
0xL or 0xK. This format is removed from the documentation, but is
still supported as a legacy format in the parser.
---
llvm/docs/LangRef.rst | 67 +++----
llvm/include/llvm/AsmParser/LLLexer.h | 1 +
llvm/include/llvm/AsmParser/LLToken.h | 2 +
llvm/lib/AsmParser/LLLexer.cpp | 196 +++++++++++++++++----
llvm/lib/AsmParser/LLParser.cpp | 34 +++-
llvm/lib/CodeGen/MIRParser/MILexer.cpp | 18 ++
llvm/lib/Support/APFloat.cpp | 2 +-
llvm/test/Assembler/float-literals.ll | 40 +++++
llvm/unittests/AsmParser/AsmParserTest.cpp | 47 +++++
9 files changed, 337 insertions(+), 70 deletions(-)
create mode 100644 llvm/test/Assembler/float-literals.ll
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 7e01331b20c570..72020e69fb17f5 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -4456,11 +4456,13 @@ Simple Constants
zeros. So '``s0x0001``' of type '``i16``' will be -1, not 1.
**Floating-point constants**
Floating-point constants use standard decimal notation (e.g.
- 123.421), exponential notation (e.g. 1.23421e+2), or a more precise
- hexadecimal notation (see below). The assembler requires the exact
- decimal value of a floating-point constant. For example, the
- assembler accepts 1.25 but rejects 1.3 because 1.3 is a repeating
- decimal in binary. Floating-point constants must have a
+ 123.421), exponential notation (e.g. 1.23421e+2), standard hexadecimal
+ notation (e.g., 0x1.3effp-43), one of several special values, or a
+ precise bitstring for the underlying value. When converting decimal and
+ hexadecimal literals to the floating-point type, the value is converted
+ using the default rounding mode (round to nearest, half to even). String
+ conversions that underflow to 0 or overflow to infinity are not permitted.
+ Floating-point constants must have a
:ref:`floating-point <t_floating>` type.
**Null pointer constants**
The identifier '``null``' is recognized as a null pointer constant
@@ -4469,31 +4471,36 @@ Simple Constants
The identifier '``none``' is recognized as an empty token constant
and must be of :ref:`token type <t_token>`.
-The one non-intuitive notation for constants is the hexadecimal form of
-floating-point constants. For example, the form
-'``double 0x432ff973cafa8000``' is equivalent to (but harder to read
-than) '``double 4.5e+15``'. The only time hexadecimal floating-point
-constants are required (and the only time that they are generated by the
-disassembler) is when a floating-point constant must be emitted but it
-cannot be represented as a decimal floating-point number in a reasonable
-number of digits. For example, NaN's, infinities, and other special
-values are represented in their IEEE hexadecimal format so that assembly
-and disassembly do not cause any bits to change in the constants.
-
-When using the hexadecimal form, constants of types bfloat, half, float, and
-double are represented using the 16-digit form shown above (which matches the
-IEEE754 representation for double); bfloat, half and float values must, however,
-be exactly representable as bfloat, IEEE 754 half, and IEEE 754 single
-precision respectively. Hexadecimal format is always used for long double, and
-there are three forms of long double. The 80-bit format used by x86 is
-represented as ``0xK`` followed by 20 hexadecimal digits. The 128-bit format
-used by PowerPC (two adjacent doubles) is represented by ``0xM`` followed by 32
-hexadecimal digits. The IEEE 128-bit format is represented by ``0xL`` followed
-by 32 hexadecimal digits. Long doubles will only work if they match the long
-double format on your target. The IEEE 16-bit format (half precision) is
-represented by ``0xH`` followed by 4 hexadecimal digits. The bfloat 16-bit
-format is represented by ``0xR`` followed by 4 hexadecimal digits. All
-hexadecimal formats are big-endian (sign bit at the left).
+Floating-point constants support the following kinds of strings:
+
+ +---------------+---------------------------------------------------+
+ | Syntax | Description |
+ +===============+===================================================+
+ | ``+4.5e-13`` | Common decimal literal. Signs are optional, as is |
+ | | the exponent portion. The decimal point is |
+ | | required, as are one or more digits before the |
+ | | the decimal point. |
+ +---------------+---------------------------------------------------+
+ | ``-0x1.fp13`` | Common hexadecimal literal. Signs are optional. |
+ | | The decimal point is required, as is the exponent |
+ | | portion of the literal (after the ``p``). |
+ +---------------+---------------------------------------------------+
+ | ``+inf``, | Positive or negative infinity. The sign is |
+ | ``-inf`` | required. |
+ +---------------+---------------------------------------------------+
+ | ``+qnan``, | Positive or negative preferred quiet NaN, i.e., |
+ | ``-qnan`` | the quiet bit is set, and all other payload bits |
+ | | are 0. The sign is required. |
+ +---------------+---------------------------------------------------+
+ | ``+nan(0x1)`` | NaN value with a particular payload, specified as |
+ | | hexadecimal (including the quiet bit as part of |
+ | | the payload). The sign is required. |
+ +---------------+---------------------------------------------------+
+ | ``f0x3c00`` | Value of the floating-point number if bitcast to |
+ | | an integer. The number must have exactly as many |
+ | | hexadecimal digits as is necessary for the size |
+ | | of the floating-point number. |
+ +---------------+---------------------------------------------------+
There are no constants of type x86_amx.
diff --git a/llvm/include/llvm/AsmParser/LLLexer.h b/llvm/include/llvm/AsmParser/LLLexer.h
index 501a7aefccd7f9..ae6c73b9ae75f5 100644
--- a/llvm/include/llvm/AsmParser/LLLexer.h
+++ b/llvm/include/llvm/AsmParser/LLLexer.h
@@ -112,6 +112,7 @@ namespace llvm {
lltok::Kind Lex0x();
lltok::Kind LexHash();
lltok::Kind LexCaret();
+ lltok::Kind LexFloatStr();
uint64_t atoull(const char *Buffer, const char *End);
uint64_t HexIntToVal(const char *Buffer, const char *End);
diff --git a/llvm/include/llvm/AsmParser/LLToken.h b/llvm/include/llvm/AsmParser/LLToken.h
index 178c911120b4ce..59062c73113fae 100644
--- a/llvm/include/llvm/AsmParser/LLToken.h
+++ b/llvm/include/llvm/AsmParser/LLToken.h
@@ -491,10 +491,12 @@ enum Kind {
DwarfMacinfo, // DW_MACINFO_foo
ChecksumKind, // CSK_foo
DbgRecordType, // dbg_foo
+ FloatLiteral, // Unparsed float literal
// Type valued tokens (TyVal).
Type,
+ FloatHexLiteral, // f0x..., stored as APSInt
APFloat, // APFloatVal
APSInt // APSInt
};
diff --git a/llvm/lib/AsmParser/LLLexer.cpp b/llvm/lib/AsmParser/LLLexer.cpp
index 1b8e033134f51b..cf478a8a175b8c 100644
--- a/llvm/lib/AsmParser/LLLexer.cpp
+++ b/llvm/lib/AsmParser/LLLexer.cpp
@@ -486,10 +486,11 @@ lltok::Kind LLLexer::LexHash() {
}
/// Lex a label, integer type, keyword, or hexadecimal integer constant.
-/// Label [-a-zA-Z$._0-9]+:
-/// IntegerType i[0-9]+
-/// Keyword sdiv, float, ...
-/// HexIntConstant [us]0x[0-9A-Fa-f]+
+/// Label [-a-zA-Z$._0-9]+:
+/// IntegerType i[0-9]+
+/// Keyword sdiv, float, ...
+/// HexIntConstant [us]0x[0-9A-Fa-f]+
+/// HexFloatConstant f0x[0-9A-Fa-f]+
lltok::Kind LLLexer::LexIdentifier() {
const char *StartChar = CurPtr;
const char *IntEnd = CurPtr[-1] == 'i' ? nullptr : StartChar;
@@ -1017,11 +1018,13 @@ lltok::Kind LLLexer::LexIdentifier() {
}
// Check for [us]0x[0-9A-Fa-f]+ which are Hexadecimal constant generated by
- // the CFE to avoid forcing it to deal with 64-bit numbers.
- if ((TokStart[0] == 'u' || TokStart[0] == 's') &&
+ // the CFE to avoid forcing it to deal with 64-bit numbers. Also check for
+ // f0x[0-9A-Fa-f]+, which is the floating-point hexadecimal literal constant.
+ if ((TokStart[0] == 'u' || TokStart[0] == 's' || TokStart[0] == 'f') &&
TokStart[1] == '0' && TokStart[2] == 'x' &&
isxdigit(static_cast<unsigned char>(TokStart[3]))) {
- int len = CurPtr-TokStart-3;
+ bool IsFloatConst = TokStart[0] == 'f';
+ int len = CurPtr - TokStart - 3;
uint32_t bits = len * 4;
StringRef HexStr(TokStart + 3, len);
if (!all_of(HexStr, isxdigit)) {
@@ -1031,10 +1034,10 @@ lltok::Kind LLLexer::LexIdentifier() {
}
APInt Tmp(bits, HexStr, 16);
uint32_t activeBits = Tmp.getActiveBits();
- if (activeBits > 0 && activeBits < bits)
+ if (!IsFloatConst && activeBits > 0 && activeBits < bits)
Tmp = Tmp.trunc(activeBits);
- APSIntVal = APSInt(Tmp, TokStart[0] == 'u');
- return lltok::APSInt;
+ APSIntVal = APSInt(Tmp, TokStart[0] != 's');
+ return IsFloatConst ? lltok::FloatHexLiteral : lltok::APSInt;
}
// If this is "cc1234", return this as just "cc".
@@ -1050,6 +1053,7 @@ lltok::Kind LLLexer::LexIdentifier() {
/// Lex all tokens that start with a 0x prefix, knowing they match and are not
/// labels.
+/// HexFPLiteral [-+]?0x[0-9A-Fa-f]+.[0-9A-Fa-f]*[pP][-+]?[0-9]+
/// HexFPConstant 0x[0-9A-Fa-f]+
/// HexFP80Constant 0xK[0-9A-Fa-f]+
/// HexFP128Constant 0xL[0-9A-Fa-f]+
@@ -1076,6 +1080,11 @@ lltok::Kind LLLexer::Lex0x() {
while (isxdigit(static_cast<unsigned char>(CurPtr[0])))
++CurPtr;
+ if (*CurPtr == '.') {
+ // HexFPLiteral, following C's %a syntax
+ return LexFloatStr();
+ }
+
if (Kind == 'J') {
// HexFPConstant - Floating point constant represented in IEEE format as a
// hexadecimal number for when exponential notation is not precise enough.
@@ -1090,28 +1099,26 @@ lltok::Kind LLLexer::Lex0x() {
default: llvm_unreachable("Unknown kind!");
case 'K':
// F80HexFPConstant - x87 long double in hexadecimal format (10 bytes)
- FP80HexToIntPair(TokStart+3, CurPtr, Pair);
- APFloatVal = APFloat(APFloat::x87DoubleExtended(), APInt(80, Pair));
- return lltok::APFloat;
+ FP80HexToIntPair(TokStart + 3, CurPtr, Pair);
+ APSIntVal = APInt(80, Pair);
+ return lltok::FloatHexLiteral;
case 'L':
// F128HexFPConstant - IEEE 128-bit in hexadecimal format (16 bytes)
- HexToIntPair(TokStart+3, CurPtr, Pair);
- APFloatVal = APFloat(APFloat::IEEEquad(), APInt(128, Pair));
- return lltok::APFloat;
+ HexToIntPair(TokStart + 3, CurPtr, Pair);
+ APSIntVal = APInt(128, Pair);
+ return lltok::FloatHexLiteral;
case 'M':
// PPC128HexFPConstant - PowerPC 128-bit in hexadecimal format (16 bytes)
- HexToIntPair(TokStart+3, CurPtr, Pair);
- APFloatVal = APFloat(APFloat::PPCDoubleDouble(), APInt(128, Pair));
- return lltok::APFloat;
+ HexToIntPair(TokStart + 3, CurPtr, Pair);
+ APSIntVal = APInt(128, Pair);
+ return lltok::FloatHexLiteral;
case 'H':
- APFloatVal = APFloat(APFloat::IEEEhalf(),
- APInt(16,HexIntToVal(TokStart+3, CurPtr)));
- return lltok::APFloat;
+ APSIntVal = APInt(16, HexIntToVal(TokStart + 3, CurPtr));
+ return lltok::FloatHexLiteral;
case 'R':
// Brain floating point
- APFloatVal = APFloat(APFloat::BFloat(),
- APInt(16, HexIntToVal(TokStart + 3, CurPtr)));
- return lltok::APFloat;
+ APSIntVal = APInt(16, HexIntToVal(TokStart + 3, CurPtr));
+ return lltok::FloatHexLiteral;
}
}
@@ -1120,6 +1127,7 @@ lltok::Kind LLLexer::Lex0x() {
/// NInteger -[0-9]+
/// FPConstant [-+]?[0-9]+[.][0-9]*([eE][-+]?[0-9]+)?
/// PInteger [0-9]+
+/// HexFPLiteral [-+]?0x[0-9A-Fa-f]+.[0-9A-Fa-f]*[pP][-+]?[0-9]+
/// HexFPConstant 0x[0-9A-Fa-f]+
/// HexFP80Constant 0xK[0-9A-Fa-f]+
/// HexFP128Constant 0xL[0-9A-Fa-f]+
@@ -1135,7 +1143,9 @@ lltok::Kind LLLexer::LexDigitOrNegative() {
return lltok::LabelStr;
}
- return lltok::Error;
+ // It might be a -inf, -nan, etc. Check if it's a float string (which will
+ // also handle any error conditions).
+ return LexFloatStr();
}
// At this point, it is either a label, int or fp constant.
@@ -1168,6 +1178,9 @@ lltok::Kind LLLexer::LexDigitOrNegative() {
if (CurPtr[0] != '.') {
if (TokStart[0] == '0' && TokStart[1] == 'x')
return Lex0x();
+ if (TokStart[0] == '-' && TokStart[1] == '0' && TokStart[2] == 'x')
+ return LexFloatStr();
+
APSIntVal = APSInt(StringRef(TokStart, CurPtr - TokStart));
return lltok::APSInt;
}
@@ -1186,26 +1199,31 @@ lltok::Kind LLLexer::LexDigitOrNegative() {
}
}
- APFloatVal = APFloat(APFloat::IEEEdouble(),
- StringRef(TokStart, CurPtr - TokStart));
- return lltok::APFloat;
+ StrVal.assign(TokStart, CurPtr - TokStart);
+ return lltok::FloatLiteral;
}
/// Lex a floating point constant starting with +.
-/// FPConstant [-+]?[0-9]+[.][0-9]*([eE][-+]?[0-9]+)?
+/// FPConstant [-+]?[0-9]+[.][0-9]*([eE][-+]?[0-9]+)?
+/// HexFPLiteral [-+]?0x[0-9A-Fa-f]+.[0-9A-Fa-f]*[pP][-+]?[0-9]+
+/// HexFPSpecial [-+](inf|qnan|nan\(0x[0-9A-Fa-f]+\))
lltok::Kind LLLexer::LexPositive() {
- // If the letter after the negative is a number, this is probably not a
- // label.
+ // If it's not numeric, check for special floating-point values.
if (!isdigit(static_cast<unsigned char>(CurPtr[0])))
- return lltok::Error;
+ return LexFloatStr();
// Skip digits.
for (++CurPtr; isdigit(static_cast<unsigned char>(CurPtr[0])); ++CurPtr)
/*empty*/;
+ // If the first non-digit is an x, check if it's a hex FP literal. LexFloatStr
+ // will reanalyze TokStart..CurPtr to make sure that it's 0x and not 413x.
+ if (CurPtr[0] == 'x')
+ return LexFloatStr();
+
// At this point, we need a '.'.
if (CurPtr[0] != '.') {
- CurPtr = TokStart+1;
+ CurPtr = TokStart + 1;
return lltok::Error;
}
@@ -1223,7 +1241,111 @@ lltok::Kind LLLexer::LexPositive() {
}
}
- APFloatVal = APFloat(APFloat::IEEEdouble(),
- StringRef(TokStart, CurPtr - TokStart));
- return lltok::APFloat;
+ StrVal.assign(TokStart, CurPtr - TokStart);
+ return lltok::FloatLiteral;
+}
+
+/// Lex all tokens that start with a + or - that could be a float literal.
+/// HexFPLiteral [-+]?0x[0-9A-Fa-f]+.[0-9A-Fa-f]*[pP][-+]?[0-9]+
+/// HexFPSpecial [-+](inf|qnan|nan\(0x[0-9A-Fa-f]+\))
+lltok::Kind LLLexer::LexFloatStr() {
+ // At the point we enter this function, we may have seen a few characters
+ // already, but how many differs based on the entry point. Rewind to the
+ // beginning just in case.
+ CurPtr = TokStart;
+
+ // Check for optional sign.
+ if (*CurPtr == '-' || *CurPtr == '+')
+ ++CurPtr;
+
+ if (*CurPtr != '0') {
+ // Check for keywords.
+ const char *LabelStart = CurPtr;
+ while (isLabelChar(*CurPtr))
+ ++CurPtr;
+ StringRef Label(LabelStart, CurPtr - LabelStart);
+
+ // Basic special values.
+ if (Label == "inf") {
+ // Copy from the beginning, to include the sign.
+ StrVal.assign(TokStart, CurPtr - TokStart);
+ return lltok::FloatLiteral;
+ }
+
+ // APFloat::convertFromString doesn't support qnan, so translate it to a
+ // nan payload string it does support.
+ if (Label == "qnan") {
+ StrVal = *TokStart == '-' ? "-nan(0)" : "nan(0)";
+ return lltok::FloatLiteral;
+ }
+
+ // NaN with payload.
+ if (Label == "nan" && *CurPtr == '(') {
+ const char *Payload = ++CurPtr;
+ while (*CurPtr && *CurPtr != ')')
+ ++CurPtr;
+
+ // If no close parenthesis, it's a bad token, return it as an error.
+ if (*CurPtr++ != ')') {
+ CurPtr = TokStart + 1;
+ return lltok::Error;
+ }
+
+ StringRef PayloadStr(Payload, CurPtr - Payload);
+ APInt Val;
+ if (PayloadStr.consume_front("0x") && PayloadStr.getAsInteger(16, Val)) {
+ StrVal.assign(TokStart, CurPtr - TokStart);
+ // Drop the leading + from the string, as APFloat::convertFromString
+ // doesn't support leading + sign.
+ if (StrVal[0] == '+')
+ StrVal.erase(0, 1);
+ return lltok::FloatLiteral;
+ }
+ }
+
+ // Bad token, return it as an error.
+ CurPtr = TokStart + 1;
+ return lltok::Error;
+ }
+ ++CurPtr;
+
+ if (*CurPtr++ != 'x') {
+ // Bad token, return it as an error.
+ CurPtr = TokStart + 1;
+ return lltok::Error;
+ }
+
+ if (!isxdigit(static_cast<unsigned char>(CurPtr[0]))) {
+ // Bad token, return it as an error.
+ CurPtr = TokStart + 1;
+ return lltok::Error;
+ }
+
+ while (isxdigit(static_cast<unsigned char>(CurPtr[0])))
+ ++CurPtr;
+
+ if (*CurPtr != '.') {
+ // Bad token, return it as an error.
+ CurPtr = TokStart + 1;
+ return lltok::Error;
+ }
+
+ ++CurPtr; // Eat the .
+ while (isxdigit(static_cast<unsigned char>(CurPtr[0])))
+ ++CurPtr;
+
+ if (*CurPtr != 'p' && *CurPtr != 'P') {
+ // Bad token, return it as an error.
+ CurPtr = TokStart + 1;
+ return lltok::Error;
+ }
+
+ ++CurPtr;
+ if (*CurPtr == '+' || *CurPtr == '-')
+ ++CurPtr;
+ while (isdigit(static_cast<unsigned char>(CurPtr[0])))
+ ++CurPtr;
+
+ StrVal.assign(TokStart, CurPtr - TokStart);
+ return lltok::FloatLiteral;
}
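As a quick summary of the lexing rules added above, here is a sketch of how a few strings classify (inferred from LexFloatStr; the 0x.25p-5 case is confirmed by the new unit test at the end of this patch):

  ; Lexed as FloatLiteral:
  ;   -0x1.8p3    +inf    -qnan (rewritten to "-nan(0)")    +nan(0x1)
  ; Lexed as Error:
  ;   0x1.8       (missing the mandatory 'p' exponent)
  ;   +nan(1)     (the payload must be written with a 0x prefix)
  ;   0x.25p-5    (needs at least one digit before the '.')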
diff --git a/llvm/lib/AsmParser/LLParser.cpp b/llvm/lib/AsmParser/LLParser.cpp
index 52d48a69f0eb53..d2a2f907e64af0 100644
--- a/llvm/lib/AsmParser/LLParser.cpp
+++ b/llvm/lib/AsmParser/LLParser.cpp
@@ -3829,10 +3829,40 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
ID.APSIntVal = Lex.getAPSIntVal();
ID.Kind = ValID::t_APSInt;
break;
- case lltok::APFloat:
+ case lltok::APFloat: {
+ assert(ExpectedTy && "Need type to parse float values");
ID.APFloatVal = Lex.getAPFloatVal();
ID.Kind = ValID::t_APFloat;
break;
+ }
+ case lltok::FloatLiteral: {
+ assert(ExpectedTy && "Need type to parse float values");
+ if (!ExpectedTy->isFloatingPointTy())
+ return error(ID.Loc, "floating point constant invalid for type");
+ ID.APFloatVal = APFloat(ExpectedTy->getFltSemantics());
+ auto Except = ID.APFloatVal.convertFromString(
+ Lex.getStrVal(), RoundingMode::NearestTiesToEven);
+ assert(Except && "Invalid float strings should be caught by the lexer");
+ // Forbid overflowing and underflowing literals, but permit inexact
+ // literals. Underflow is thrown when the result is denormal, so to allow
+ // denormals, only reject underflowing literals that resulted in a zero.
+ if (*Except & APFloat::opOverflow)
+ return error(ID.Loc, "floating point constant overflowed type");
+ if ((*Except & APFloat::opUnderflow) && ID.APFloatVal.isZero())
+ return error(ID.Loc, "floating point constant underflowed type");
+ ID.Kind = ValID::t_APFloat;
+ break;
+ }
+ case lltok::FloatHexLiteral: {
+ assert(ExpectedTy && "Need type to parse float values");
+ auto &Semantics = ExpectedTy->getFltSemantics();
+ const APInt &Bits = Lex.getAPSIntVal();
+ if (APFloat::getSizeInBits(Semantics) != Bits.getBitWidth())
+ return error(ID.Loc, "float hex literal has incorrect number of bits");
+ ID.APFloatVal = APFloat(Semantics, Bits);
+ ID.Kind = ValID::t_APFloat;
+ break;
+ }
case lltok::kw_true:
ID.ConstantVal = ConstantInt::getTrue(Context);
ID.Kind = ValID::t_Constant;
@@ -6255,7 +6285,7 @@ bool LLParser::parseConstantValue(Type *Ty, Constant *&C) {
C = nullptr;
ValID ID;
auto Loc = Lex.getLoc();
- if (parseValID(ID, /*PFS=*/nullptr))
+ if (parseValID(ID, /*PFS=*/nullptr, /*ExpectedTy=*/Ty))
return true;
switch (ID.Kind) {
case ValID::t_APSInt:
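To illustrate the conversion rules above: overflow and underflow-to-zero are rejected, while merely inexact or denormal results are kept. The two failing strings come from the new unit tests below; the denormal line is an assumed example consistent with the comment about permitting denormals:

  @ok = global double 4.9e-324          ; underflows to a denormal: accepted
  @no1 = global double 1.0e999999999    ; error: floating point constant overflowed type
  @no2 = global double 1.0e-999999999   ; error: floating point constant underflowed type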
diff --git a/llvm/lib/CodeGen/MIRParser/MILexer.cpp b/llvm/lib/CodeGen/MIRParser/MILexer.cpp
index 7153902fe2e7a6..c454fbe865c408 100644
--- a/llvm/lib/CodeGen/MIRParser/MILexer.cpp
+++ b/llvm/lib/CodeGen/MIRParser/MILexer.cpp
@@ -594,6 +594,22 @@ static Cursor maybeLexHexadecimalLiteral(Cursor C, MIToken &Token) {
return C;
}
+static Cursor maybeLexFloatHexBits(Cursor C, MIToken &Token) {
+ if (C.peek() != 'f')
+ return std::nullopt;
+ if (C.peek(1) != '0' || (C.peek(2) != 'x' && C.peek(2) != 'X'))
+ return std::nullopt;
+ Cursor Range = C;
+ C.advance(3);
+ while (isxdigit(C.peek()))
+ C.advance();
+ StringRef StrVal = Range.upto(C);
+ if (StrVal.size() <= 3)
+ return std::nullopt;
+ Token.reset(MIToken::FloatingPointLiteral, Range.upto(C));
+ return C;
+}
+
static Cursor maybeLexNumericalLiteral(Cursor C, MIToken &Token) {
if (!isdigit(C.peek()) && (C.peek() != '-' || !isdigit(C.peek(1))))
return std::nullopt;
@@ -730,6 +746,8 @@ StringRef llvm::lexMIToken(StringRef Source, MIToken &Token,
if (Cursor R = maybeLexMachineBasicBlock(C, Token, ErrorCallback))
return R.remaining();
+ if (Cursor R = maybeLexFloatHexBits(C, Token))
+ return R.remaining();
if (Cursor R = maybeLexIdentifier(C, Token))
return R.remaining();
if (Cursor R = maybeLexJumpTableIndex(C, Token))
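The MIR lexer gains the same f0x form, so machine IR can spell exact bit patterns as well. A hedged sketch (a 16-bit pattern for a half; the bit count must match the type, which is exactly what the updated NVPTX error test in the second patch checks):

  %0:_(s16) = G_FCONSTANT half f0x3C00   ; 1.0 as an IEEE half bit pattern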
diff --git a/llvm/lib/Support/APFloat.cpp b/llvm/lib/Support/APFloat.cpp
index c9adfca8b3b768..5c19b5d51f6997 100644
--- a/llvm/lib/Support/APFloat.cpp
+++ b/llvm/lib/Support/APFloat.cpp
@@ -3221,7 +3221,7 @@ bool IEEEFloat::convertFromStringSpecials(StringRef str) {
if (str.size() < MIN_NAME_SIZE)
return false;
- if (str == "inf" || str == "INFINITY" || str == "+Inf") {
+ if (str == "inf" || str == "INFINITY" || str == "+Inf" || str == "+inf") {
makeInf(false);
return true;
}
diff --git a/llvm/test/Assembler/float-literals.ll b/llvm/test/Assembler/float-literals.ll
new file mode 100644
index 00000000000000..f2f2f6ea7d688b
--- /dev/null
+++ b/llvm/test/Assembler/float-literals.ll
@@ -0,0 +1,40 @@
+; RUN: llvm-as < %s | llvm-dis | FileCheck %s
+
+; CHECK: @a = global float -0.000000e+00
+@a = global float -0.0
+; CHECK: @b = global float 0.000000e+00
+@b = global float +0.0
+; CHECK: @c = global float 0.000000e+00
+@c = global float 0.0
+; CHECK: @d = global float 0.000000e+00
+@d = global float 0.e1
+; CHECK: @e = global float 0.000000e+00
+@e = global float 0.e-1
+; CHECK: @f = global float 0.000000e+00
+@f = global float 0.e+1
+; CHECK: @g = global float 0x3DF0000000000000
+@g = global float 0x1.0p-32
+; CHECK: @h = global float 0x41F0000000000000
+@h = global float 0x1.0p+32
+; CHECK: @i = global float 0x41FC300000000000
+@i = global float 0x1.c3p32
+; CHECK: @j = global float 0x3FFFF00000000000
+@j = global float 0x1.ffp0
+; CHECK: @k = global float 0xC0FFFFFFE0000000
+@k = global float -0xfff.fffp5
+; CHECK: @l = global float 0x4080FDE000000000
+@l = global float +0x10.fdep5
+
+; CHECK: @0 = global double 0x7FF0000000000000
+@0 = global double +inf
+; CHECK: @1 = global ppc_fp128 f0x0000000000000000FFF0000000000000
+@1 = global ppc_fp128 -inf
+; CHECK: @2 = global half f0xFE00
+@2 = global half -qnan
+; CHECK: @3 = global bfloat f0x7FC0
+@3 = global bfloat +qnan
+; CHECK: @4 = global fp128 f0x7FFF80000000000000000000DEADBEEF
+@4 = global fp128 +nan(0xdeadbeef)
+; CHECK: @5 = global x86_fp80 f0x0001FFFF000000000000
+@5 = global x86_fp80 f0x0000ffff000000000000
+
diff --git a/llvm/unittests/AsmParser/AsmParserTest.cpp b/llvm/unittests/AsmParser/AsmParserTest.cpp
index ce226705068afb..e2f254dd940def 100644
--- a/llvm/unittests/AsmParser/AsmParserTest.cpp
+++ b/llvm/unittests/AsmParser/AsmParserTest.cpp
@@ -82,6 +82,44 @@ TEST(AsmParserTest, TypeAndConstantValueParsing) {
ASSERT_TRUE(isa<ConstantFP>(V));
EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(3.5));
+ V = parseConstantValue("double 0x13.5p-52", Error, M);
+ ASSERT_TRUE(V);
+ EXPECT_TRUE(V->getType()->isDoubleTy());
+ ASSERT_TRUE(isa<ConstantFP>(V));
+ EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(0x13.5p-52));
+
+ V = parseConstantValue("fp128 1.0e-4932", Error, M);
+ ASSERT_TRUE(V);
+ EXPECT_TRUE(V->getType()->isFP128Ty());
+ ASSERT_TRUE(isa<ConstantFP>(V));
+ EXPECT_TRUE(cast<ConstantFP>(V)->getValue().isDenormal());
+
+ V = parseConstantValue("fp128 1.1897314953572317650857593266280070162e4932",
+ Error, M);
+ ASSERT_TRUE(V);
+ EXPECT_TRUE(V->getType()->isFP128Ty());
+ ASSERT_TRUE(isa<ConstantFP>(V));
+ EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(
+ APFloat::getLargest(APFloat::IEEEquad())));
+
+ V = parseConstantValue("float f0xabcdef01", Error, M);
+ ASSERT_TRUE(V);
+ EXPECT_TRUE(V->getType()->isFloatTy());
+ ASSERT_TRUE(isa<ConstantFP>(V));
+ EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(-0x1.9bde02p-40));
+
+ V = parseConstantValue("fp128 f0x80000000000000000000000000000000", Error, M);
+ ASSERT_TRUE(V);
+ EXPECT_TRUE(V->getType()->isFP128Ty());
+ ASSERT_TRUE(isa<ConstantFP>(V));
+ EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(-0.0));
+
+ V = parseConstantValue("fp128 -inf", Error, M);
+ ASSERT_TRUE(V);
+ EXPECT_TRUE(V->getType()->isFP128Ty());
+ ASSERT_TRUE(isa<ConstantFP>(V));
+ EXPECT_TRUE(cast<ConstantFP>(V)->getValue().isNegInfinity());
+
V = parseConstantValue("i32 42", Error, M);
ASSERT_TRUE(V);
EXPECT_TRUE(V->getType()->isIntegerTy());
@@ -136,6 +174,15 @@ TEST(AsmParserTest, TypeAndConstantValueParsing) {
EXPECT_FALSE(parseConstantValue("i32 3, ", Error, M));
EXPECT_EQ(Error.getMessage(), "expected end of string");
+
+ EXPECT_FALSE(parseConstantValue("double 1.0e999999999", Error, M));
+ EXPECT_EQ(Error.getMessage(), "floating point constant overflowed type");
+
+ EXPECT_FALSE(parseConstantValue("double 1.0e-999999999", Error, M));
+ EXPECT_EQ(Error.getMessage(), "floating point constant underflowed type");
+
+ EXPECT_FALSE(parseConstantValue("double 0x.25p-5", Error, M));
+ EXPECT_EQ(Error.getMessage(), "expected value token");
}
TEST(AsmParserTest, TypeAndConstantValueWithSlotMappingParsing) {
>From ddd5bb0a68435599843e47fbd6aeb5595eb8dbb2 Mon Sep 17 00:00:00 2001
From: Joshua Cranmer <joshua.cranmer at intel.com>
Date: Mon, 6 Jan 2025 13:14:41 -0800
Subject: [PATCH 2/3] Real test failures uncovered by previous changes.
---
.../CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll | 4 ++--
llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll | 2 +-
.../CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll | 4 ++--
llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll | 4 ++--
llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll | 4 ++--
.../CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir | 4 ++--
6 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/llvm/test/CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll b/llvm/test/CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll
index ce7ae1e426bdac..fdbac9350fd9ee 100644
--- a/llvm/test/CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll
+++ b/llvm/test/CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll
@@ -42,7 +42,7 @@ define fp128 @test_v1f128(<1 x fp128> %a) nounwind {
; CHECK-LABEL: test_v1f128:
; CHECK: // %bb.0:
; CHECK-NEXT: ret
- %b = call fp128 @llvm.vector.reduce.fmul.f128.v1f128(fp128 0xL00000000000000003fff00000000000000, <1 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fmul.f128.v1f128(fp128 f0x3fff0000000000000000000000000000, <1 x fp128> %a)
ret fp128 %b
}
@@ -60,7 +60,7 @@ define fp128 @test_v2f128(<2 x fp128> %a) nounwind {
; CHECK-LABEL: test_v2f128:
; CHECK: // %bb.0:
; CHECK-NEXT: b __multf3
- %b = call fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 0xL00000000000000003fff00000000000000, <2 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 f0x3fff0000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
diff --git a/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll b/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll
index 9a6cfb72f28f1d..b0590f6f83ab0d 100644
--- a/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll
+++ b/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll
@@ -118,7 +118,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH41C8, half 0xH0>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x41C8, half f0x0000>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
diff --git a/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll b/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll
index 1416fa9033f3b1..59b49adbae0594 100644
--- a/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll
+++ b/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll
@@ -128,7 +128,7 @@ define fp128 @test_v2f128(<2 x fp128> %a) nounwind {
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: pop {r11, lr}
; CHECK-NEXT: mov pc, lr
- %b = call fast fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 0xL00000000000000003fff00000000000000, <2 x fp128> %a)
+ %b = call fast fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 f0x3fff0000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
@@ -151,6 +151,6 @@ define fp128 @test_v2f128_strict(<2 x fp128> %a) nounwind {
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: pop {r11, lr}
; CHECK-NEXT: mov pc, lr
- %b = call fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 0xL00000000000000003fff00000000000000, <2 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 f0x3fff0000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
diff --git a/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll b/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll
index bd6f234ad48eca..14f80ce4aff875 100644
--- a/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll
+++ b/llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll
@@ -42,7 +42,7 @@ define fp128 @test_v1f128(<1 x fp128> %a) nounwind {
; CHECK-LABEL: test_v1f128:
; CHECK: @ %bb.0:
; CHECK-NEXT: mov pc, lr
- %b = call fp128 @llvm.vector.reduce.fmul.f128.v1f128(fp128 0xL00000000000000003fff00000000000000, <1 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fmul.f128.v1f128(fp128 f0x3fff0000000000000000000000000000, <1 x fp128> %a)
ret fp128 %b
}
@@ -78,7 +78,7 @@ define fp128 @test_v2f128(<2 x fp128> %a) nounwind {
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: pop {r4, r5, r11, lr}
; CHECK-NEXT: mov pc, lr
- %b = call fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 0xL00000000000000003fff00000000000000, <2 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fmul.f128.v2f128(fp128 f0x3fff0000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
diff --git a/llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll b/llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll
index d0210e30fdf625..b046e2f56a7054 100644
--- a/llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll
+++ b/llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll
@@ -1463,7 +1463,7 @@ define <64 x half> @f27() #0 {
; CHECK-NEXT: v0 = vxor(v0,v0)
; CHECK-NEXT: jumpr r31
; CHECK-NEXT: }
- %v0 = insertelement <64 x half> undef, half 0xH0, i32 0
+ %v0 = insertelement <64 x half> undef, half f0x0000, i32 0
%v1 = shufflevector <64 x half> %v0, <64 x half> undef, <64 x i32> zeroinitializer
ret <64 x half> %v1
}
@@ -1475,7 +1475,7 @@ define <128 x half> @f28() #0 {
; CHECK-NEXT: v1:0.w = vsub(v1:0.w,v1:0.w)
; CHECK-NEXT: jumpr r31
; CHECK-NEXT: }
- %v0 = insertelement <128 x half> undef, half 0xH0, i32 0
+ %v0 = insertelement <128 x half> undef, half f0x0000, i32 0
%v1 = shufflevector <128 x half> %v0, <128 x half> undef, <128 x i32> zeroinitializer
ret <128 x half> %v1
}
diff --git a/llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir b/llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir
index 6280d4e90ebf1e..f2368a40941ecf 100644
--- a/llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir
+++ b/llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir
@@ -17,8 +17,8 @@ registers:
body: |
bb.0.entry:
%0 = LD_f32_avar 0, 4, 1, 2, 32, &test_param_0
- ; CHECK: [[@LINE+1]]:33: floating point constant does not have type 'float'
- %1 = FADD_rnf32ri %0, float 0xH3C00
+ ; CHECK: [[@LINE+1]]:33: float hex literal has incorrect number of bits
+ %1 = FADD_rnf32ri %0, float f0x3C00
StoreRetvalF32 %1, 0
Return
...
>From 69fce1904ae71a792575c735c22efcc5d65907d7 Mon Sep 17 00:00:00 2001
From: Joshua Cranmer <joshua.cranmer at intel.com>
Date: Mon, 6 Jan 2025 13:14:47 -0800
Subject: [PATCH 3/3] [AsmWriter] Output floating-point literals using the new
hex format.
LLVM IR files for tests were rewritten with the following sed commands:
sed -e 's/0x[KHR]/f0x/g' -i
sed -e 's/0x[LM]\([0-9a-fA-F]\{16\}\)\([0-9a-fA-F]\{16\}\)/f0x\2\1/g' -i
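For example, the second command swaps the two 8-byte halves of the old 0xL/0xM encodings so the value reads as one whole-number bit pattern (taken from the fp128 test updates below):

  fp128 0xL00000000000000003fff00000000000000
    becomes
  fp128 f0x3fff0000000000000000000000000000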
---
clang/test/C/C11/n1396.c | 40 +-
.../v8.2a-fp16-intrinsics-constrained.c | 20 +-
.../CodeGen/AArch64/v8.2a-fp16-intrinsics.c | 10 +-
.../test/CodeGen/AMDGPU/amdgpu-atomic-float.c | 24 +-
.../CodeGen/PowerPC/ppc64-complex-parms.c | 8 +-
clang/test/CodeGen/RISCV/riscv64-vararg.c | 6 +-
.../CodeGen/SystemZ/atomic_is_lock_free.c | 2 +-
clang/test/CodeGen/X86/Float16-arithmetic.c | 2 +-
clang/test/CodeGen/X86/Float16-complex.c | 58 +-
clang/test/CodeGen/X86/avx512fp16-builtins.c | 20 +-
.../test/CodeGen/X86/avx512vlfp16-builtins.c | 22 +-
.../CodeGen/X86/long-double-config-size.c | 4 +-
.../test/CodeGen/X86/x86-atomic-long_double.c | 40 +-
clang/test/CodeGen/X86/x86_64-longdouble.c | 8 +-
clang/test/CodeGen/atomic.c | 4 +-
clang/test/CodeGen/builtin-complex.c | 4 +-
clang/test/CodeGen/builtin_Float16.c | 8 +-
.../test/CodeGen/builtins-elementwise-math.c | 2 +-
clang/test/CodeGen/builtins-nvptx.c | 16 +-
clang/test/CodeGen/builtins.c | 18 +-
clang/test/CodeGen/catch-undef-behavior.c | 4 +-
clang/test/CodeGen/const-init.c | 2 +-
clang/test/CodeGen/fp16-ops-strictfp.c | 14 +-
clang/test/CodeGen/fp16-ops.c | 6 +-
clang/test/CodeGen/isfpclass.c | 2 +-
clang/test/CodeGen/math-builtins-long.c | 16 +-
clang/test/CodeGen/mingw-long-double.c | 8 +-
clang/test/CodeGen/spir-half-type.cpp | 40 +-
clang/test/CodeGenCUDA/types.cu | 2 +-
clang/test/CodeGenCXX/auto-var-init.cpp | 14 +-
.../CodeGenCXX/cxx11-user-defined-literal.cpp | 2 +-
.../test/CodeGenCXX/float128-declarations.cpp | 48 +-
.../test/CodeGenCXX/float16-declarations.cpp | 32 +-
clang/test/CodeGenCXX/ibm128-declarations.cpp | 2 +-
clang/test/CodeGenHLSL/builtins/rcp.hlsl | 8 +-
clang/test/CodeGenOpenCL/amdgpu-alignment.cl | 8 +-
clang/test/CodeGenOpenCL/half.cl | 8 +-
.../Frontend/fixed_point_conversions_half.c | 18 +-
.../Headers/__clang_hip_math_deprecated.hip | 4 +-
clang/test/OpenMP/atomic_capture_codegen.cpp | 2 +-
clang/test/OpenMP/atomic_update_codegen.cpp | 2 +-
llvm/lib/IR/AsmWriter.cpp | 13 +-
.../Analysis/CostModel/AArch64/arith-fp.ll | 6 +-
.../CostModel/AArch64/insert-extract.ll | 8 +-
.../Analysis/CostModel/AArch64/reduce-fadd.ll | 96 +-
llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll | 160 ++--
llvm/test/Analysis/CostModel/ARM/divrem.ll | 80 +-
llvm/test/Analysis/CostModel/ARM/reduce-fp.ll | 96 +-
.../Analysis/CostModel/RISCV/phi-const.ll | 2 +-
.../Analysis/CostModel/RISCV/reduce-fadd.ll | 280 +++---
.../Analysis/CostModel/RISCV/reduce-fmul.ll | 252 +++---
.../Analysis/CostModel/RISCV/rvv-phi-const.ll | 6 +-
llvm/test/Analysis/Lint/scalable.ll | 2 +-
llvm/test/Assembler/bfloat.ll | 26 +-
llvm/test/Assembler/constant-splat.ll | 20 +-
llvm/test/Assembler/half-constprop.ll | 6 +-
llvm/test/Assembler/half-conv.ll | 2 +-
llvm/test/Assembler/invalid-fp80hex.ll | 2 +-
llvm/test/Assembler/short-hexpair.ll | 2 +-
llvm/test/Assembler/unnamed.ll | 2 +-
llvm/test/Bitcode/compatibility-3.8.ll | 4 +-
llvm/test/Bitcode/compatibility-3.9.ll | 4 +-
llvm/test/Bitcode/compatibility-4.0.ll | 4 +-
llvm/test/Bitcode/compatibility-5.0.ll | 4 +-
llvm/test/Bitcode/compatibility-6.0.ll | 4 +-
llvm/test/Bitcode/compatibility.ll | 4 +-
llvm/test/Bitcode/constant-splat.ll | 20 +-
.../AArch64/GlobalISel/arm64-irtranslator.ll | 4 +-
.../AArch64/GlobalISel/combine-fabs.mir | 8 +-
.../AArch64/GlobalISel/combine-flog2.mir | 2 +-
.../GlobalISel/combine-fminimum-fmaximum.mir | 16 +-
.../GlobalISel/combine-fminnum-fmaxnum.mir | 8 +-
.../AArch64/GlobalISel/combine-fneg.mir | 8 +-
.../AArch64/GlobalISel/combine-fptrunc.mir | 4 +-
.../AArch64/GlobalISel/combine-fsqrt.mir | 2 +-
.../fp128-legalize-crash-pr35690.mir | 2 +-
.../GlobalISel/legalize-fp128-fconstant.mir | 2 +-
.../GlobalISel/legalize-fp16-fconstant.mir | 8 +-
...relegalizer-combiner-select-to-fminmax.mir | 8 +-
.../GlobalISel/select-fp16-fconstant.mir | 2 +-
llvm/test/CodeGen/AArch64/arm64-aapcs.ll | 2 +-
.../CodeGen/AArch64/arm64-build-vector.ll | 2 +-
.../test/CodeGen/AArch64/arm64-fp-imm-size.ll | 2 +-
llvm/test/CodeGen/AArch64/arm64-fp-imm.ll | 2 +-
llvm/test/CodeGen/AArch64/arm64-fp128.ll | 2 +-
llvm/test/CodeGen/AArch64/bf16-imm.ll | 16 +-
.../test/CodeGen/AArch64/bf16-instructions.ll | 6 +-
.../CodeGen/AArch64/bf16-v4-instructions.ll | 2 +-
llvm/test/CodeGen/AArch64/bf16.ll | 2 +-
llvm/test/CodeGen/AArch64/f16-imm.ll | 16 +-
llvm/test/CodeGen/AArch64/f16-instructions.ll | 6 +-
llvm/test/CodeGen/AArch64/fcopysign-noneon.ll | 2 +-
.../CodeGen/AArch64/fp16-v4-instructions.ll | 2 +-
.../CodeGen/AArch64/fp16-vector-nvcast.ll | 12 +-
.../CodeGen/AArch64/fp16_intrinsic_lane.ll | 30 +-
.../AArch64/fp16_intrinsic_scalar_1op.ll | 10 +-
llvm/test/CodeGen/AArch64/half.ll | 4 +-
llvm/test/CodeGen/AArch64/isinf.ll | 4 +-
llvm/test/CodeGen/AArch64/mattr-all.ll | 2 +-
.../CodeGen/AArch64/sve-pred-selectop3.ll | 12 +-
.../vecreduce-fadd-legalization-strict.ll | 4 +-
.../AArch64/vecreduce-fadd-legalization.ll | 4 +-
.../amdgpu-prelegalizer-combiner-crash.mir | 4 +-
.../GlobalISel/combine-fcanonicalize.mir | 24 +-
.../GlobalISel/combine-fdiv-sqrt-to-rsq.mir | 22 +-
.../GlobalISel/combine-foldable-fneg.mir | 4 +-
.../AMDGPU/GlobalISel/combine-fsub-fneg.mir | 4 +-
.../CodeGen/AMDGPU/GlobalISel/combine-rsq.mir | 4 +-
.../AMDGPU/GlobalISel/irtranslate-bf16.ll | 12 +-
.../GlobalISel/irtranslator-atomicrmw.ll | 8 +-
.../AMDGPU/GlobalISel/irtranslator-call.ll | 2 +-
.../AMDGPU/GlobalISel/legalize-fconstant.mir | 2 +-
.../AMDGPU/GlobalISel/legalize-fcos.mir | 16 +-
.../AMDGPU/GlobalISel/legalize-fdiv.mir | 4 +-
.../AMDGPU/GlobalISel/legalize-fmaxnum.mir | 8 +-
.../AMDGPU/GlobalISel/legalize-fminnum.mir | 8 +-
.../AMDGPU/GlobalISel/legalize-fsin.mir | 16 +-
.../GlobalISel/legalize-intrinsic-round.mir | 72 +-
.../AMDGPU/GlobalISel/legalize-sitofp.mir | 8 +-
.../AMDGPU/GlobalISel/legalize-uitofp.mir | 8 +-
.../GlobalISel/llvm.amdgcn.wqm.demote.ll | 4 +-
.../regbankcombiner-clamp-fmed3-const.mir | 10 +-
.../regbankcombiner-clamp-minmax-const.mir | 40 +-
.../regbankcombiner-fmed3-minmax-const.mir | 40 +-
.../GlobalISel/regbankselect-default.mir | 2 +-
llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll | 2 +-
...amdgpu-codegenprepare-fold-binop-select.ll | 2 +-
.../AMDGPU/amdgpu-simplify-libcall-pow.ll | 6 +-
.../AMDGPU/amdgpu-simplify-libcall-rootn.ll | 8 +-
llvm/test/CodeGen/AMDGPU/br_cc.f16.ll | 8 +-
.../AMDGPU/build-vector-insert-elt-infloop.ll | 2 +-
.../CodeGen/AMDGPU/dagcombine-fmul-sel.ll | 8 +-
.../CodeGen/AMDGPU/extract-subvector-16bit.ll | 12 +-
llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll | 36 +-
llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll | 4 +-
llvm/test/CodeGen/AMDGPU/fma.f16.ll | 20 +-
llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll | 2 +-
llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll | 6 +-
llvm/test/CodeGen/AMDGPU/fneg-combines.ll | 4 +-
llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll | 4 +-
llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir | 22 +-
.../AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll | 14 +-
llvm/test/CodeGen/AMDGPU/fp-classify.ll | 6 +-
llvm/test/CodeGen/AMDGPU/fract-match.ll | 60 +-
llvm/test/CodeGen/AMDGPU/imm16.ll | 16 +-
llvm/test/CodeGen/AMDGPU/immv216.ll | 16 +-
.../test/CodeGen/AMDGPU/inline-constraints.ll | 14 +-
.../AMDGPU/insert_vector_elt.v2bf16.ll | 6 +-
.../CodeGen/AMDGPU/insert_vector_elt.v2i16.ll | 6 +-
.../CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll | 4 +-
llvm/test/CodeGen/AMDGPU/mad-mix.ll | 8 +-
llvm/test/CodeGen/AMDGPU/mai-inline.ll | 2 +-
llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll | 4 +-
.../AMDGPU/multi-divergent-exit-region.ll | 4 +-
llvm/test/CodeGen/AMDGPU/pack.v2f16.ll | 10 +-
.../test/CodeGen/AMDGPU/pk_max_f16_literal.ll | 18 +-
.../CodeGen/AMDGPU/private-memory-atomics.ll | 2 +-
.../AMDGPU/promote-alloca-vector-to-vector.ll | 4 +-
.../AMDGPU/select-fabs-fneg-extract.f16.ll | 4 +-
.../AMDGPU/select-fabs-fneg-extract.v2f16.ll | 4 +-
llvm/test/CodeGen/AMDGPU/select.f16.ll | 16 +-
llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll | 4 +-
llvm/test/CodeGen/ARM/arm-half-promote.ll | 2 +-
.../ARM/armv8.2a-fp16-vector-intrinsics.ll | 8 +-
llvm/test/CodeGen/ARM/bf16-imm.ll | 4 +-
.../CodeGen/ARM/const-load-align-thumb.mir | 4 +-
.../ARM/constant-island-SOImm-limit16.mir | 4 +-
llvm/test/CodeGen/ARM/fp16-bitcast.ll | 4 +-
llvm/test/CodeGen/ARM/fp16-instructions.ll | 50 +-
llvm/test/CodeGen/ARM/fp16-litpool-arm.mir | 6 +-
llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir | 6 +-
llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir | 6 +-
llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir | 6 +-
llvm/test/CodeGen/ARM/fp16-no-condition.ll | 4 +-
llvm/test/CodeGen/ARM/fp16-v3.ll | 2 +-
llvm/test/CodeGen/ARM/pr47454.ll | 2 +-
llvm/test/CodeGen/ARM/store_half.ll | 2 +-
.../vecreduce-fadd-legalization-soft-float.ll | 4 +-
.../ARM/vecreduce-fadd-legalization-strict.ll | 4 +-
llvm/test/CodeGen/DirectX/all.ll | 2 +-
llvm/test/CodeGen/DirectX/any.ll | 2 +-
llvm/test/CodeGen/DirectX/atan2.ll | 16 +-
llvm/test/CodeGen/DirectX/degrees.ll | 2 +-
llvm/test/CodeGen/DirectX/exp.ll | 2 +-
llvm/test/CodeGen/DirectX/log.ll | 2 +-
llvm/test/CodeGen/DirectX/log10.ll | 2 +-
llvm/test/CodeGen/DirectX/radians.ll | 10 +-
llvm/test/CodeGen/DirectX/sign.ll | 4 +-
llvm/test/CodeGen/DirectX/step.ll | 8 +-
.../test/CodeGen/DirectX/vector_reduce_add.ll | 10 +-
llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll | 2 +-
.../CodeGen/Hexagon/autohvx/hfnosplat_cp.ll | 2 +-
llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll | 6 +-
.../Hexagon/autohvx/isel-mstore-fp16.ll | 2 +-
llvm/test/CodeGen/LoongArch/vararg.ll | 4 +-
.../CodeGen/MIR/Generic/bfloat-immediates.mir | 6 +-
llvm/test/CodeGen/Mips/msa/fexuprl.ll | 2 +-
llvm/test/CodeGen/NVPTX/bf16-instructions.ll | 2 +-
llvm/test/CodeGen/NVPTX/bf16.ll | 2 +-
llvm/test/CodeGen/NVPTX/half.ll | 2 +-
.../CodeGen/PowerPC/2008-05-01-ppc_fp128.ll | 4 +-
llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll | 14 +-
llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll | 2 +-
.../PowerPC/2008-10-28-UnprocessedNode.ll | 4 +-
.../CodeGen/PowerPC/2008-10-28-f128-i32.ll | 8 +-
.../PowerPC/2008-12-02-LegalizeTypeAssert.ll | 4 +-
llvm/test/CodeGen/PowerPC/aix-complex.ll | 4 +-
.../CodeGen/PowerPC/builtins-ppc-p9-f128.ll | 16 +-
llvm/test/CodeGen/PowerPC/bv-widen-undef.ll | 2 +-
llvm/test/CodeGen/PowerPC/complex-return.ll | 4 +-
llvm/test/CodeGen/PowerPC/constant-pool.ll | 16 +-
llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll | 2 +-
.../CodeGen/PowerPC/disable-ctr-ppcf128.ll | 4 +-
llvm/test/CodeGen/PowerPC/f128-aggregates.ll | 4 +-
llvm/test/CodeGen/PowerPC/f128-arith.ll | 8 +-
llvm/test/CodeGen/PowerPC/f128-compare.ll | 4 +-
llvm/test/CodeGen/PowerPC/f128-conv.ll | 12 +-
llvm/test/CodeGen/PowerPC/f128-fma.ll | 8 +-
llvm/test/CodeGen/PowerPC/f128-passByValue.ll | 4 +-
.../CodeGen/PowerPC/f128-truncateNconv.ll | 8 +-
llvm/test/CodeGen/PowerPC/float-asmprint.ll | 4 +-
.../CodeGen/PowerPC/float-load-store-pair.ll | 4 +-
llvm/test/CodeGen/PowerPC/fminnum.ll | 2 +-
llvm/test/CodeGen/PowerPC/fp-classify.ll | 6 +-
.../PowerPC/fp128-bitcast-after-operation.ll | 4 +-
.../global-address-non-got-indirect-access.ll | 4 +-
.../PowerPC/handle-f16-storage-type.ll | 2 +-
.../PowerPC/ppc32-align-long-double-sf.ll | 2 +-
.../PowerPC/ppc32-constant-BE-ppcf128.ll | 2 +-
llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll | 2 +-
.../CodeGen/PowerPC/ppc_fp128-bcwriter.ll | 4 +-
llvm/test/CodeGen/PowerPC/ppcf128-2.ll | 2 +-
llvm/test/CodeGen/PowerPC/ppcf128-4.ll | 2 +-
llvm/test/CodeGen/PowerPC/ppcf128-endian.ll | 4 +-
llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir | 2 +-
llvm/test/CodeGen/PowerPC/ppcf128sf.ll | 4 +-
llvm/test/CodeGen/PowerPC/pr15632.ll | 4 +-
llvm/test/CodeGen/PowerPC/pr16556-2.ll | 2 +-
llvm/test/CodeGen/PowerPC/pr16573.ll | 2 +-
llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll | 2 +-
.../test/CodeGen/PowerPC/resolvefi-basereg.ll | 2 +-
llvm/test/CodeGen/PowerPC/rs-undef-use.ll | 2 +-
.../CodeGen/PowerPC/scalar-min-max-p10.ll | 4 +-
llvm/test/CodeGen/PowerPC/std-unal-fi.ll | 2 +-
.../CodeGen/PowerPC/vector-reduce-fadd.ll | 8 +-
llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll | 2 +-
.../instruction-select/fp-constant-f16.mir | 2 +-
.../irtranslator/calling-conv-half.ll | 24 +-
...calling-conv-ilp32-ilp32f-ilp32d-common.ll | 20 +-
.../calling-conv-lp64-lp64f-lp64d-common.ll | 12 +-
.../GlobalISel/irtranslator/splat_vector.ll | 24 +-
.../RISCV/calling-conv-ilp32-ilp32f-common.ll | 2 +-
...calling-conv-ilp32-ilp32f-ilp32d-common.ll | 8 +-
.../test/CodeGen/RISCV/calling-conv-ilp32e.ll | 8 +-
llvm/test/CodeGen/RISCV/fp128.ll | 4 +-
llvm/test/CodeGen/RISCV/half-zfa-fli.ll | 14 +-
llvm/test/CodeGen/RISCV/stack-store-check.ll | 14 +-
llvm/test/CodeGen/RISCV/tail-calls.ll | 2 +-
llvm/test/CodeGen/RISCV/vararg.ll | 2 +-
llvm/test/CodeGen/SPARC/fp128-select.ll | 4 +-
llvm/test/CodeGen/SPARC/fp128.ll | 4 +-
.../subgroup-rotate.ll | 2 +-
.../uniform-group-instructions.ll | 4 +-
llvm/test/CodeGen/SPIRV/half_extension.ll | 2 +-
.../test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll | 8 +-
.../SPIRV/instructions/integer-casts.ll | 8 +-
.../OpExtInst-OpenCL_std-ptr-types.ll | 4 +-
.../CodeGen/SPIRV/transcoding/spec_const.ll | 2 +-
.../SPIRV/transcoding/sub_group_ballot.ll | 4 +-
.../transcoding/sub_group_clustered_reduce.ll | 8 +-
.../transcoding/sub_group_extended_types.ll | 4 +-
.../sub_group_non_uniform_arithmetic.ll | 24 +-
.../transcoding/sub_group_non_uniform_vote.ll | 2 +-
.../SPIRV/transcoding/sub_group_shuffle.ll | 4 +-
.../transcoding/sub_group_shuffle_relative.ll | 4 +-
llvm/test/CodeGen/SystemZ/args-01.ll | 4 +-
llvm/test/CodeGen/SystemZ/args-02.ll | 4 +-
llvm/test/CodeGen/SystemZ/args-03.ll | 4 +-
llvm/test/CodeGen/SystemZ/asm-10.ll | 2 +-
llvm/test/CodeGen/SystemZ/asm-17.ll | 2 +-
llvm/test/CodeGen/SystemZ/asm-19.ll | 4 +-
llvm/test/CodeGen/SystemZ/call-03.ll | 2 +-
llvm/test/CodeGen/SystemZ/call-zos-01.ll | 8 +-
llvm/test/CodeGen/SystemZ/call-zos-vararg.ll | 4 +-
llvm/test/CodeGen/SystemZ/fp-cmp-03.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-cmp-04.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-cmp-06.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-cmp-zero.ll | 4 +-
llvm/test/CodeGen/SystemZ/fp-const-01.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-const-02.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-const-05.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-const-07.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-const-08.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-const-09.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-const-11.ll | 12 +-
llvm/test/CodeGen/SystemZ/fp-mul-12.ll | 8 +-
llvm/test/CodeGen/SystemZ/fp-strict-cmp-03.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-strict-cmp-04.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-strict-cmp-06.ll | 2 +-
.../test/CodeGen/SystemZ/fp-strict-cmps-03.ll | 2 +-
.../test/CodeGen/SystemZ/fp-strict-cmps-06.ll | 2 +-
llvm/test/CodeGen/SystemZ/fp-strict-mul-04.ll | 20 +-
llvm/test/CodeGen/SystemZ/fp-strict-mul-12.ll | 8 +-
llvm/test/CodeGen/SystemZ/loop-03.ll | 14 +-
llvm/test/CodeGen/SystemZ/soft-float-args.ll | 2 +-
llvm/test/CodeGen/SystemZ/tdc-03.ll | 2 +-
llvm/test/CodeGen/SystemZ/vec-args-08.ll | 4 +-
llvm/test/CodeGen/SystemZ/vec-max-05.ll | 8 +-
llvm/test/CodeGen/SystemZ/vec-min-05.ll | 8 +-
.../Thumb2/LowOverheadLoops/exitcount.ll | 2 +-
llvm/test/CodeGen/Thumb2/bf16-instructions.ll | 6 +-
.../CodeGen/Thumb2/mve-float16regloops.ll | 6 +-
.../test/CodeGen/Thumb2/mve-pred-selectop3.ll | 22 +-
.../CodeGen/Thumb2/mve-vcvt-fixed-to-float.ll | 64 +-
.../CodeGen/Thumb2/mve-vcvt-float-to-fixed.ll | 72 +-
llvm/test/CodeGen/VE/Scalar/br_cc.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fabs.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fcopysign.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fcos.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fma.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fp_add.ll | 6 +-
llvm/test/CodeGen/VE/Scalar/fp_div.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fp_frem.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fp_mul.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fp_sub.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fsin.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/fsqrt.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/load_gv.ll | 2 +-
llvm/test/CodeGen/VE/Scalar/maxnum.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/minnum.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/pow.ll | 6 +-
llvm/test/CodeGen/VE/Scalar/select.ll | 4 +-
llvm/test/CodeGen/VE/Scalar/store_gv.ll | 2 +-
llvm/test/CodeGen/WebAssembly/varargs.ll | 4 +-
.../X86/2008-01-16-FPStackifierAssert.ll | 8 +-
.../CodeGen/X86/2008-10-06-x87ld-nan-1.ll | 2 +-
.../CodeGen/X86/2008-10-06-x87ld-nan-2.ll | 6 +-
.../test/CodeGen/X86/2009-02-12-SpillerBug.ll | 12 +-
.../X86/2009-03-03-BitcastLongDouble.ll | 2 +-
.../test/CodeGen/X86/2009-03-09-SpillerBug.ll | 4 +-
.../test/CodeGen/X86/2009-03-12-CPAlignBug.ll | 4 +-
llvm/test/CodeGen/X86/2010-05-07-ldconvert.ll | 2 +-
.../CodeGen/X86/2010-05-12-FastAllocKills.ll | 6 +-
.../X86/GlobalISel/regbankselect-x87.ll | 6 +-
llvm/test/CodeGen/X86/atomic-nocx16.ll | 6 +-
llvm/test/CodeGen/X86/avx10_2-cmp.ll | 6 +-
.../test/CodeGen/X86/avx512-insert-extract.ll | 2 +-
.../X86/avx512fp16-combine-shuffle-fma.ll | 2 +-
llvm/test/CodeGen/X86/avx512fp16-mov.ll | 2 +-
llvm/test/CodeGen/X86/bfloat-constrained.ll | 6 +-
llvm/test/CodeGen/X86/bfloat.ll | 2 +-
.../CodeGen/X86/build_fp16_constant_vector.ll | 4 +-
llvm/test/CodeGen/X86/byval6.ll | 4 +-
llvm/test/CodeGen/X86/cmov-fp.ll | 16 +-
llvm/test/CodeGen/X86/coff-fp-section-name.ll | 14 +-
llvm/test/CodeGen/X86/complex-fca.ll | 2 +-
llvm/test/CodeGen/X86/fake-use-hpfloat.ll | 2 +-
llvm/test/CodeGen/X86/float-asmprint.ll | 6 +-
.../X86/fold-int-pow2-with-fmul-or-fdiv.ll | 14 +-
llvm/test/CodeGen/X86/fp-stack-O0.ll | 2 +-
llvm/test/CodeGen/X86/fp128-calling-conv.ll | 2 +-
llvm/test/CodeGen/X86/fp128-cast-strict.ll | 4 +-
llvm/test/CodeGen/X86/fp128-cast.ll | 12 +-
llvm/test/CodeGen/X86/fp128-i128.ll | 6 +-
llvm/test/CodeGen/X86/fp128-libcalls.ll | 2 +-
llvm/test/CodeGen/X86/fp128-load.ll | 2 +-
llvm/test/CodeGen/X86/fp128-select.ll | 2 +-
llvm/test/CodeGen/X86/fp128-store.ll | 2 +-
llvm/test/CodeGen/X86/half-constrained.ll | 6 +-
llvm/test/CodeGen/X86/half.ll | 10 +-
llvm/test/CodeGen/X86/inline-asm-fpstack.ll | 8 +-
llvm/test/CodeGen/X86/isel-x87.ll | 2 +-
llvm/test/CodeGen/X86/ldzero.ll | 2 +-
llvm/test/CodeGen/X86/mcu-abi.ll | 2 +-
llvm/test/CodeGen/X86/pr114520.ll | 8 +-
llvm/test/CodeGen/X86/pr13577.ll | 2 +-
llvm/test/CodeGen/X86/pr33349.ll | 2 +-
llvm/test/CodeGen/X86/pr34080.ll | 4 +-
llvm/test/CodeGen/X86/pr34177.ll | 2 +-
llvm/test/CodeGen/X86/pr40529.ll | 4 +-
llvm/test/CodeGen/X86/pr43157.ll | 2 +-
llvm/test/CodeGen/X86/pr91005.ll | 2 +-
llvm/test/CodeGen/X86/select.ll | 2 +-
llvm/test/CodeGen/X86/shrink-fp-const2.ll | 2 +-
.../CodeGen/X86/soft-fp-legal-in-HW-reg.ll | 4 +-
llvm/test/CodeGen/X86/sse-fcopysign.ll | 2 +-
llvm/test/CodeGen/X86/win64-long-double.ll | 2 +-
llvm/test/CodeGen/X86/x86-32-intrcc.ll | 4 +-
llvm/test/CodeGen/X86/x86-64-intrcc.ll | 4 +-
.../COFF/AArch64/codeview-b-register.mir | 2 +-
.../COFF/AArch64/codeview-h-register.mir | 2 +-
llvm/test/DebugInfo/COFF/fortran-basic.ll | 2 +-
.../x86-fp-stackifier-drop-locations.mir | 4 +-
.../Sparc/entry-value-complex-reg-expr.ll | 2 +-
llvm/test/DebugInfo/Sparc/subreg.ll | 2 +-
.../test/DebugInfo/X86/float_const_loclist.ll | 4 +-
.../DebugInfo/X86/global-sra-fp80-array.ll | 6 +-
.../DebugInfo/X86/global-sra-fp80-struct.ll | 4 +-
.../Instrumentation/AddressSanitizer/basic.ll | 2 +-
.../Instrumentation/HeapProfiler/basic.ll | 4 +-
.../NumericalStabilitySanitizer/basic.ll | 50 +-
.../IPConstantProp/fp-bc-icmp-const-fold.ll | 4 +-
llvm/test/Transforms/Attributor/nofpclass.ll | 28 +-
.../CodeGenPrepare/AArch64/fpclass-test.ll | 8 +-
.../CodeGenPrepare/RISCV/fpclass-test.ll | 8 +-
.../CodeGenPrepare/X86/fpclass-test.ll | 16 +-
llvm/test/Transforms/EarlyCSE/atan.ll | 4 +-
llvm/test/Transforms/EarlyCSE/math-2.ll | 4 +-
.../X86/expand-large-fp-convert-si129tofp.ll | 6 +-
.../X86/expand-large-fp-convert-ui129tofp.ll | 6 +-
.../2008-11-25-APFloatAssert.ll | 2 +-
llvm/test/Transforms/Inline/simplify-fp128.ll | 6 +-
.../InstCombine/2008-02-28-OrFCmpCrash.ll | 8 +-
.../InstCombine/2009-02-04-FPBitcast.ll | 2 +-
.../AArch64/sve-intrinsic-fmul-idempotency.ll | 4 +-
.../sve-intrinsic-fmul_u-idempotency.ll | 4 +-
.../InstCombine/AMDGPU/amdgcn-intrinsics.ll | 50 +-
.../Transforms/InstCombine/AMDGPU/fmed3.ll | 10 +-
.../InstCombine/X86/2009-03-23-i80-fp80.ll | 4 +-
llvm/test/Transforms/InstCombine/and-fcmp.ll | 126 +--
.../Transforms/InstCombine/binop-itofp.ll | 24 +-
.../Transforms/InstCombine/binop-select.ll | 8 +-
.../InstCombine/bitcast-inseltpoison.ll | 4 +-
.../Transforms/InstCombine/bitcast-store.ll | 2 +-
llvm/test/Transforms/InstCombine/bitcast.ll | 4 +-
.../Transforms/InstCombine/cabs-discrete.ll | 6 +-
.../InstCombine/canonicalize-const-to-bop.ll | 6 +-
.../InstCombine/canonicalize-fcmp-inf.ll | 96 +-
.../InstCombine/cast-int-fcmp-eq-0.ll | 4 +-
llvm/test/Transforms/InstCombine/cast.ll | 2 +-
.../combine-is.fpclass-and-fcmp.ll | 64 +-
.../InstCombine/copysign-fneg-fabs.ll | 28 +-
llvm/test/Transforms/InstCombine/cos-1.ll | 2 +-
.../create-class-from-logic-fcmp.ll | 842 +++++++++---------
llvm/test/Transforms/InstCombine/exp2-1.ll | 2 +-
.../Transforms/InstCombine/exp2-to-ldexp.ll | 4 +-
llvm/test/Transforms/InstCombine/fabs.ll | 16 +-
.../InstCombine/fcmp-denormals-are-zero.ll | 44 +-
.../Transforms/InstCombine/fcmp-special.ll | 2 +-
llvm/test/Transforms/InstCombine/fcmp.ll | 56 +-
.../Transforms/InstCombine/fdiv-cos-sin.ll | 2 +-
llvm/test/Transforms/InstCombine/fma.ll | 4 +-
llvm/test/Transforms/InstCombine/fmul.ll | 6 +-
.../InstCombine/fpclass-from-dom-cond.ll | 4 +-
llvm/test/Transforms/InstCombine/fpextend.ll | 2 +-
llvm/test/Transforms/InstCombine/fptrunc.ll | 4 +-
llvm/test/Transforms/InstCombine/fsub.ll | 2 +-
.../InstCombine/log-to-intrinsic.ll | 24 +-
.../test/Transforms/InstCombine/nanl-fp128.ll | 6 +-
llvm/test/Transforms/InstCombine/nanl-fp80.ll | 6 +-
.../Transforms/InstCombine/nanl-ppc-fp128.ll | 6 +-
llvm/test/Transforms/InstCombine/pow-1.ll | 10 +-
llvm/test/Transforms/InstCombine/pow-exp.ll | 4 +-
.../Transforms/InstCombine/pow-to-ldexp.ll | 20 +-
.../Transforms/InstCombine/remquol-fp128.ll | 4 +-
.../Transforms/InstCombine/remquol-fp80.ll | 4 +-
.../InstCombine/remquol-ppc-fp128.ll | 4 +-
.../select-with-extreme-eq-cond.ll | 4 +-
.../unordered-compare-and-ordered.ll | 62 +-
llvm/test/Transforms/InstCombine/win-fdim.ll | 4 +-
.../InstSimplify/ConstProp/AMDGPU/cos.ll | 44 +-
.../InstSimplify/ConstProp/AMDGPU/fract.ll | 32 +-
.../InstSimplify/ConstProp/AMDGPU/sin.ll | 44 +-
.../Transforms/InstSimplify/ConstProp/cast.ll | 4 +-
.../ConstProp/convert-from-fp16.ll | 12 +-
.../InstSimplify/ConstProp/copysign.ll | 30 +-
.../InstSimplify/ConstProp/libfunc.ll | 4 +-
.../InstSimplify/ConstProp/loads.ll | 12 +-
.../InstSimplify/ConstProp/logf128.ll | 78 +-
.../InstSimplify/ConstProp/min-max.ll | 32 +-
.../InstSimplify/bitcast-vector-fold.ll | 2 +-
.../Transforms/InstSimplify/canonicalize.ll | 152 ++--
.../InstSimplify/constfold-constrained.ll | 4 +-
llvm/test/Transforms/InstSimplify/exp10.ll | 12 +-
.../InstSimplify/floating-point-arithmetic.ll | 6 +-
llvm/test/Transforms/InstSimplify/fp-nan.ll | 8 +-
llvm/test/Transforms/InstSimplify/frexp.ll | 12 +-
.../Transforms/InstSimplify/is_fpclass.ll | 2 +-
.../InstSimplify/known-never-infinity.ll | 50 +-
llvm/test/Transforms/InstSimplify/ldexp.ll | 26 +-
.../AMDGPU/merge-stores.ll | 4 +-
.../LoopLoadElim/type-mismatch-opaque-ptr.ll | 2 +-
.../Transforms/LoopLoadElim/type-mismatch.ll | 2 +-
.../AArch64/scalable-reductions.ll | 2 +-
.../AArch64/scalar_interleave.ll | 4 +-
.../LoopVectorize/AArch64/sve-illegal-type.ll | 6 +-
.../LoopVectorize/AMDGPU/packed-math.ll | 6 +-
.../LoopVectorize/ARM/mve-known-trip-count.ll | 2 +-
.../ARM/tail-folding-not-allowed.ll | 6 +-
.../LoopVectorize/RISCV/illegal-type.ll | 4 +-
.../RISCV/scalable-reductions.ll | 12 +-
.../LoopVectorize/X86/fp80-widest-type.ll | 4 +-
.../X86/x86_fp80-vector-store.ll | 4 +-
.../MemCpyOpt/2008-02-24-MultipleUseofSRet.ll | 8 +-
.../Transforms/MemCpyOpt/memcpy-to-memset.ll | 2 +-
llvm/test/Transforms/MemCpyOpt/memcpy.ll | 4 +-
llvm/test/Transforms/MemCpyOpt/sret.ll | 4 +-
.../Reassociate/reassoc-intermediate-fnegs.ll | 18 +-
.../Transforms/SCCP/fp-bc-icmp-const-fold.ll | 2 +-
llvm/test/Transforms/SCCP/pr50901.ll | 8 +-
llvm/test/Transforms/SCCP/sitofp.ll | 2 +-
.../extracts-from-scalarizable-vector.ll | 32 +-
.../SLPVectorizer/AArch64/gather-load-128.ll | 8 +-
.../SLPVectorizer/AArch64/reduce-fadd.ll | 10 +-
.../SLPVectorizer/AMDGPU/reduction.ll | 10 +-
.../Transforms/SLPVectorizer/NVPTX/v2f16.ll | 20 +-
.../SLPVectorizer/RISCV/reductions.ll | 2 +-
.../RISCV/strided-unsupported-type.ll | 8 +-
.../SLPVectorizer/X86/fabs-cost-softfp.ll | 6 +-
.../SLPVectorizer/scalarazied-result.ll | 4 +-
llvm/test/Transforms/SROA/ppcf128-no-fold.ll | 8 +-
llvm/test/Transforms/SROA/select-load.ll | 12 +-
llvm/test/Transforms/Scalarizer/min-bits.ll | 20 +-
.../TypePromotion/AArch64/bitcast.ll | 4 +-
.../Util/libcalls-shrinkwrap-long-double.ll | 104 +--
.../AArch64/shuffletoidentity.ll | 12 +-
.../RISCV/vpintrin-scalarization.ll | 6 +-
llvm/test/Verifier/AMDGPU/intrinsic-immarg.ll | 8 +-
518 files changed, 3294 insertions(+), 3299 deletions(-)
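A reading key for the otherwise mechanical hunks below, inferred from the replacements in this patch rather than from any normative documentation: each legacy per-type bit-pattern prefix (0xH, 0xK, 0xL, 0xM) is rewritten to the unified f0x form, which spells the raw bit pattern with the most-significant hex digit first. All four rows in this sketch encode 1.0 in the respective type, matching literals that appear in the hunks:

; half       0xH3C00                              ->  f0x3C00
; x86_fp80   0xK3FFF8000000000000000               ->  f0x3FFF8000000000000000
; fp128      0xL00000000000000003FFF000000000000   ->  f0x3FFF0000000000000000000000000000
; ppc_fp128  0xM3FF00000000000000000000000000000   ->  f0x00000000000000003FF0000000000000

Note that for 0xL and 0xM the two 64-bit halves swap places: 0xL printed the low word first and 0xM printed the first (most-significant) double first, so only those rewrites reorder the hex digits.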
diff --git a/clang/test/C/C11/n1396.c b/clang/test/C/C11/n1396.c
index 6f76cfe9594961..264c69c733cb68 100644
--- a/clang/test/C/C11/n1396.c
+++ b/clang/test/C/C11/n1396.c
@@ -31,7 +31,7 @@
// CHECK-X64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
// CHECK-X64-NEXT: [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
// CHECK-X64-NEXT: ret float [[CONV1]]
//
@@ -42,7 +42,7 @@
// CHECK-AARCH64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-AARCH64-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-AARCH64-NEXT: ret float [[CONV1]]
//
@@ -64,7 +64,7 @@
// CHECK-PPC32-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC32-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC32-NEXT: ret float [[CONV1]]
//
@@ -75,7 +75,7 @@
// CHECK-PPC64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC64-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC64-NEXT: ret float [[CONV1]]
//
@@ -86,7 +86,7 @@
// CHECK-SPARCV9-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-SPARCV9-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-SPARCV9-NEXT: ret float [[CONV1]]
//
@@ -102,7 +102,7 @@ float extended_float_func(float x) {
// CHECK-X64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
// CHECK-X64-NEXT: [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
// CHECK-X64-NEXT: ret float [[CONV1]]
//
@@ -113,7 +113,7 @@ float extended_float_func(float x) {
// CHECK-AARCH64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-AARCH64-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-AARCH64-NEXT: ret float [[CONV1]]
//
@@ -135,7 +135,7 @@ float extended_float_func(float x) {
// CHECK-PPC32-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC32-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC32-NEXT: ret float [[CONV1]]
//
@@ -146,7 +146,7 @@ float extended_float_func(float x) {
// CHECK-PPC64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC64-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC64-NEXT: ret float [[CONV1]]
//
@@ -157,7 +157,7 @@ float extended_float_func(float x) {
// CHECK-SPARCV9-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-SPARCV9-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-SPARCV9-NEXT: ret float [[CONV1]]
//
@@ -173,7 +173,7 @@ float extended_float_func_cast(float x) {
// CHECK-X64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
// CHECK-X64-NEXT: [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
// CHECK-X64-NEXT: ret float [[CONV1]]
//
@@ -184,7 +184,7 @@ float extended_float_func_cast(float x) {
// CHECK-AARCH64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-AARCH64-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-AARCH64-NEXT: ret float [[CONV1]]
//
@@ -206,7 +206,7 @@ float extended_float_func_cast(float x) {
// CHECK-PPC32-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC32-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC32-NEXT: ret float [[CONV1]]
//
@@ -217,7 +217,7 @@ float extended_float_func_cast(float x) {
// CHECK-PPC64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC64-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC64-NEXT: ret float [[CONV1]]
//
@@ -228,7 +228,7 @@ float extended_float_func_cast(float x) {
// CHECK-SPARCV9-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-SPARCV9-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-SPARCV9-NEXT: ret float [[CONV1]]
//
@@ -244,7 +244,7 @@ float extended_double_func(float x) {
// CHECK-X64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-X64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT: [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
// CHECK-X64-NEXT: [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
// CHECK-X64-NEXT: ret float [[CONV1]]
//
@@ -255,7 +255,7 @@ float extended_double_func(float x) {
// CHECK-AARCH64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-AARCH64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-AARCH64-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-AARCH64-NEXT: ret float [[CONV1]]
//
@@ -277,7 +277,7 @@ float extended_double_func(float x) {
// CHECK-PPC32-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC32-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC32-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC32-NEXT: ret float [[CONV1]]
//
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
// CHECK-PPC64-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-PPC64-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT: [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
// CHECK-PPC64-NEXT: [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
// CHECK-PPC64-NEXT: ret float [[CONV1]]
//
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
// CHECK-SPARCV9-NEXT: store float [[X]], ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
// CHECK-SPARCV9-NEXT: [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT: [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
// CHECK-SPARCV9-NEXT: [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
// CHECK-SPARCV9-NEXT: ret float [[CONV1]]
//
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
#include <arm_fp16.h>
// COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED: [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED: [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
// COMMONIR: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// COMMONIR: ret i16 [[TMP2]]
uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
}
// COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED: [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED: [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
// COMMONIR: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// COMMONIR: ret i16 [[TMP2]]
uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
}
// COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED: [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED: [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
// COMMONIR: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// COMMONIR: ret i16 [[TMP2]]
uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
}
// COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED: [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED: [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
// COMMONIR: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// COMMONIR: ret i16 [[TMP2]]
uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
}
// COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED: [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED: [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED: [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
// COMMONIR: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// COMMONIR: ret i16 [[TMP2]]
uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
}
// CHECK-LABEL: test_vceqzh_f16
-// CHECK: [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK: [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
// CHECK: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// CHECK: ret i16 [[TMP2]]
uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
}
// CHECK-LABEL: test_vcgezh_f16
-// CHECK: [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK: [[TMP1:%.*]] = fcmp oge half %a, f0x0000
// CHECK: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// CHECK: ret i16 [[TMP2]]
uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
}
// CHECK-LABEL: test_vcgtzh_f16
-// CHECK: [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK: [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
// CHECK: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// CHECK: ret i16 [[TMP2]]
uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
}
// CHECK-LABEL: test_vclezh_f16
-// CHECK: [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK: [[TMP1:%.*]] = fcmp ole half %a, f0x0000
// CHECK: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// CHECK: ret i16 [[TMP2]]
uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
}
// CHECK-LABEL: test_vcltzh_f16
-// CHECK: [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK: [[TMP1:%.*]] = fcmp olt half %a, f0x0000
// CHECK: [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
// CHECK: ret i16 [[TMP2]]
uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
// SAFE-NEXT: [[ENTRY:.*:]]
// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
// SAFE-NEXT: ret half [[TMP0]]
//
// UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
// UNSAFE-NEXT: [[ENTRY:.*:]]
// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
// UNSAFE-NEXT: ret half [[TMP0]]
//
_Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
// SAFE-NEXT: [[ENTRY:.*:]]
// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
// SAFE-NEXT: ret half [[TMP0]]
//
// UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
// UNSAFE-NEXT: [[ENTRY:.*:]]
// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
// UNSAFE-NEXT: ret half [[TMP0]]
//
_Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
// SAFE-NEXT: [[ENTRY:.*:]]
// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half 0xH3C00 seq_cst, align 2
-// SAFE-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], 0xH3C00
+// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half f0x3C00 seq_cst, align 2
+// SAFE-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], f0x3C00
// SAFE-NEXT: ret half [[TMP1]]
//
// UNSAFE-LABEL: define dso_local half @test__Float16_pre_dc(
@@ -244,8 +244,8 @@ _Float16 test__Float16_post_dc()
// UNSAFE-NEXT: [[ENTRY:.*:]]
// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], 0xH3C00
+// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], f0x3C00
// UNSAFE-NEXT: ret half [[TMP1]]
//
_Float16 test__Float16_pre_dc()
@@ -259,8 +259,8 @@ _Float16 test__Float16_pre_dc()
// SAFE-NEXT: [[ENTRY:.*:]]
// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half 0xH3C00 seq_cst, align 2
-// SAFE-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], 0xH3C00
+// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half f0x3C00 seq_cst, align 2
+// SAFE-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], f0x3C00
// SAFE-NEXT: ret half [[TMP1]]
//
// UNSAFE-LABEL: define dso_local half @test__Float16_pre_inc(
@@ -268,8 +268,8 @@ _Float16 test__Float16_pre_dc()
// UNSAFE-NEXT: [[ENTRY:.*:]]
// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], 0xH3C00
+// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], f0x3C00
// UNSAFE-NEXT: ret half [[TMP1]]
//
_Float16 test__Float16_pre_inc()
diff --git a/clang/test/CodeGen/PowerPC/ppc64-complex-parms.c b/clang/test/CodeGen/PowerPC/ppc64-complex-parms.c
index b8f59f57b2dcd5..4ff4f3aaa0aaff 100644
--- a/clang/test/CodeGen/PowerPC/ppc64-complex-parms.c
+++ b/clang/test/CodeGen/PowerPC/ppc64-complex-parms.c
@@ -110,8 +110,8 @@ void bar_long_double(void) {
// CHECK: %[[VAR21:[A-Za-z0-9.]+]] = alloca { ppc_fp128, ppc_fp128 }, align 16
// CHECK: %[[VAR22:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 0
// CHECK: %[[VAR23:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 1
-// CHECK: store ppc_fp128 0xM40000000000000000000000000000000, ptr %[[VAR22]]
-// CHECK: store ppc_fp128 0xMC0040000000000008000000000000000, ptr %[[VAR23]]
+// CHECK: store ppc_fp128 f0x00000000000000004000000000000000, ptr %[[VAR22]]
+// CHECK: store ppc_fp128 f0x8000000000000000C004000000000000, ptr %[[VAR23]]
// CHECK: %[[VAR24:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 0
// CHECK: %[[VAR25:[A-Za-z0-9.]+]] = load ppc_fp128, ptr %[[VAR24]], align 16
// CHECK: %[[VAR26:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 1
@@ -126,8 +126,8 @@ void bar_ibm128(void) {
// CHECK: %[[VAR21:[A-Za-z0-9.]+]] = alloca { ppc_fp128, ppc_fp128 }, align 16
// CHECK: %[[VAR22:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 0
// CHECK: %[[VAR23:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 1
-// CHECK: store ppc_fp128 0xM40000000000000000000000000000000, ptr %[[VAR22]]
-// CHECK: store ppc_fp128 0xMC0040000000000008000000000000000, ptr %[[VAR23]]
+// CHECK: store ppc_fp128 f0x00000000000000004000000000000000, ptr %[[VAR22]]
+// CHECK: store ppc_fp128 f0x8000000000000000C004000000000000, ptr %[[VAR23]]
// CHECK: %[[VAR24:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 0
// CHECK: %[[VAR25:[A-Za-z0-9.]+]] = load ppc_fp128, ptr %[[VAR24]], align 16
// CHECK: %[[VAR26:[A-Za-z0-9.]+]] = getelementptr inbounds nuw { ppc_fp128, ppc_fp128 }, ptr %[[VAR21]], i32 0, i32 1
diff --git a/clang/test/CodeGen/RISCV/riscv64-vararg.c b/clang/test/CodeGen/RISCV/riscv64-vararg.c
index a278f74ca4a863..ce3c9700c8786e 100644
--- a/clang/test/CodeGen/RISCV/riscv64-vararg.c
+++ b/clang/test/CodeGen/RISCV/riscv64-vararg.c
@@ -75,7 +75,7 @@ int f_va_callee(int, ...);
// CHECK-NEXT: [[TMP2:%.*]] = load i128, ptr [[COERCE_DIVE]], align 16
// CHECK-NEXT: call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[BYVAL_TEMP]], ptr align 8 [[DOTCOMPOUNDLITERAL6]], i64 32, i1 false)
// CHECK-NEXT: [[CALL:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i64 noundef 3, double noundef 4.000000e+00, double noundef 5.000000e+00, i64 [[TMP0]], [2 x i64] [[TMP1]], i128 [[TMP2]], ptr noundef [[BYVAL_TEMP]])
-// CHECK-NEXT: [[CALL11:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, fp128 noundef 0xL00000000000000004001400000000000, i32 noundef signext 6, i32 noundef signext 7, i32 noundef signext 8, i32 noundef signext 9)
+// CHECK-NEXT: [[CALL11:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, fp128 noundef f0x40014000000000000000000000000000, i32 noundef signext 6, i32 noundef signext 7, i32 noundef signext 8, i32 noundef signext 9)
// CHECK-NEXT: [[A13:%.*]] = getelementptr inbounds nuw [[STRUCT_SMALL_ALIGNED]], ptr [[DOTCOMPOUNDLITERAL12]], i32 0, i32 0
// CHECK-NEXT: store i128 5, ptr [[A13]], align 16
// CHECK-NEXT: [[COERCE_DIVE14:%.*]] = getelementptr inbounds nuw [[STRUCT_SMALL_ALIGNED]], ptr [[DOTCOMPOUNDLITERAL12]], i32 0, i32 0
@@ -87,7 +87,7 @@ int f_va_callee(int, ...);
// CHECK-NEXT: store ptr null, ptr [[B18]], align 8
// CHECK-NEXT: [[TMP4:%.*]] = load [2 x i64], ptr [[DOTCOMPOUNDLITERAL16]], align 8
// CHECK-NEXT: [[CALL19:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, [2 x i64] [[TMP4]], i32 noundef signext 6, i32 noundef signext 7, i32 noundef signext 8, i32 noundef signext 9)
-// CHECK-NEXT: [[CALL20:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, i32 noundef signext 5, fp128 noundef 0xL00000000000000004001800000000000, i32 noundef signext 7, i32 noundef signext 8, i32 noundef signext 9)
+// CHECK-NEXT: [[CALL20:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, i32 noundef signext 5, fp128 noundef f0x40018000000000000000000000000000, i32 noundef signext 7, i32 noundef signext 8, i32 noundef signext 9)
// CHECK-NEXT: [[A22:%.*]] = getelementptr inbounds nuw [[STRUCT_SMALL_ALIGNED]], ptr [[DOTCOMPOUNDLITERAL21]], i32 0, i32 0
// CHECK-NEXT: store i128 6, ptr [[A22]], align 16
// CHECK-NEXT: [[COERCE_DIVE23:%.*]] = getelementptr inbounds nuw [[STRUCT_SMALL_ALIGNED]], ptr [[DOTCOMPOUNDLITERAL21]], i32 0, i32 0
@@ -99,7 +99,7 @@ int f_va_callee(int, ...);
// CHECK-NEXT: store ptr null, ptr [[B27]], align 8
// CHECK-NEXT: [[TMP6:%.*]] = load [2 x i64], ptr [[DOTCOMPOUNDLITERAL25]], align 8
// CHECK-NEXT: [[CALL28:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, i32 noundef signext 5, [2 x i64] [[TMP6]], i32 noundef signext 7, i32 noundef signext 8, i32 noundef signext 9)
-// CHECK-NEXT: [[CALL29:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, i32 noundef signext 5, i32 noundef signext 6, fp128 noundef 0xL00000000000000004001C00000000000, i32 noundef signext 8, i32 noundef signext 9)
+// CHECK-NEXT: [[CALL29:%.*]] = call signext i32 (i32, ...) @f_va_callee(i32 noundef signext 1, i32 noundef signext 2, i32 noundef signext 3, i32 noundef signext 4, i32 noundef signext 5, i32 noundef signext 6, fp128 noundef f0x4001C000000000000000000000000000, i32 noundef signext 8, i32 noundef signext 9)
// CHECK-NEXT: [[A31:%.*]] = getelementptr inbounds nuw [[STRUCT_SMALL_ALIGNED]], ptr [[DOTCOMPOUNDLITERAL30]], i32 0, i32 0
// CHECK-NEXT: store i128 7, ptr [[A31]], align 16
// CHECK-NEXT: [[COERCE_DIVE32:%.*]] = getelementptr inbounds nuw [[STRUCT_SMALL_ALIGNED]], ptr [[DOTCOMPOUNDLITERAL30]], i32 0, i32 0
diff --git a/clang/test/CodeGen/SystemZ/atomic_is_lock_free.c b/clang/test/CodeGen/SystemZ/atomic_is_lock_free.c
index 32c436eaf36dda..ddd96ab8ae9c6f 100644
--- a/clang/test/CodeGen/SystemZ/atomic_is_lock_free.c
+++ b/clang/test/CodeGen/SystemZ/atomic_is_lock_free.c
@@ -22,7 +22,7 @@ _Atomic long double Atomic_fp128; // Also check the alignment of this.
// CHECK: @Int128_Atomic = {{.*}} i128 0, align 16
// CHECK: @Int128_Al16 = {{.*}} i128 0, align 16
// CHECK: @AtomicStruct = {{.*}} { %struct.anon, [4 x i8] } zeroinitializer, align 16
-// CHECK: @Atomic_fp128 = {{.*}} fp128 0xL00000000000000000000000000000000, align 16
+// CHECK: @Atomic_fp128 = {{.*}} fp128 f0x00000000000000000000000000000000, align 16
// CHECK-LABEL: @fun0
diff --git a/clang/test/CodeGen/X86/Float16-arithmetic.c b/clang/test/CodeGen/X86/Float16-arithmetic.c
index 064a85d5ee1263..97ffa45cb260ff 100644
--- a/clang/test/CodeGen/X86/Float16-arithmetic.c
+++ b/clang/test/CodeGen/X86/Float16-arithmetic.c
@@ -230,7 +230,7 @@ _Float16 RealOp_c(_Float16 _Complex a) {
// CHECK-NEXT: store half [[A:%.*]], ptr [[A_ADDR]], align 2
// CHECK-NEXT: [[TMP0:%.*]] = load half, ptr [[A_ADDR]], align 2
// CHECK-NEXT: [[EXT:%.*]] = fpext half [[TMP0]] to float
-// CHECK-NEXT: ret half 0xH0000
+// CHECK-NEXT: ret half f0x0000
//
_Float16 ImagOp(_Float16 a) {
return __imag a;
diff --git a/clang/test/CodeGen/X86/Float16-complex.c b/clang/test/CodeGen/X86/Float16-complex.c
index 53d44f3ae966b3..54cfa96af3b743 100644
--- a/clang/test/CodeGen/X86/Float16-complex.c
+++ b/clang/test/CodeGen/X86/Float16-complex.c
@@ -15,7 +15,7 @@
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[ADD]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP2]]
//
@@ -35,7 +35,7 @@
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP2]]
//
@@ -216,7 +216,7 @@ _Float16 _Complex add_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[ADD1]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP3:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP3]]
//
@@ -241,7 +241,7 @@ _Float16 _Complex add_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP3:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP3]]
//
@@ -655,7 +655,7 @@ _Float16 _Complex add2_haff_ccc(_Float16 _Complex a, _Float16 _Complex b, _Float
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[SUB]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP2]]
//
@@ -675,7 +675,7 @@ _Float16 _Complex add2_haff_ccc(_Float16 _Complex a, _Float16 _Complex b, _Float
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP2]]
//
@@ -854,7 +854,7 @@ _Float16 _Complex sub_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[MUL]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP2]]
//
@@ -874,7 +874,7 @@ _Float16 _Complex sub_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP2]]
//
@@ -1096,7 +1096,7 @@ _Float16 _Complex mul_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[DIV]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP2]]
//
@@ -1116,7 +1116,7 @@ _Float16 _Complex mul_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP2]]
//
@@ -1187,7 +1187,7 @@ _Float16 _Complex div_half_cr(_Float16 _Complex a, _Float16 b) {
// AVX-NEXT: [[B_REAL:%.*]] = load half, ptr [[B_REALP]], align 2
// AVX-NEXT: [[B_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[B]], i32 0, i32 1
// AVX-NEXT: [[B_IMAG:%.*]] = load half, ptr [[B_IMAGP]], align 2
-// AVX-NEXT: [[CALL:%.*]] = call <2 x half> @__divhc3(half noundef [[TMP0]], half noundef 0xH0000, half noundef [[B_REAL]], half noundef [[B_IMAG]]) #[[ATTR1]]
+// AVX-NEXT: [[CALL:%.*]] = call <2 x half> @__divhc3(half noundef [[TMP0]], half noundef f0x0000, half noundef [[B_REAL]], half noundef [[B_IMAG]]) #[[ATTR1]]
// AVX-NEXT: store <2 x half> [[CALL]], ptr [[COERCE]], align 2
// AVX-NEXT: [[COERCE_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[COERCE]], i32 0, i32 0
// AVX-NEXT: [[COERCE_REAL:%.*]] = load half, ptr [[COERCE_REALP]], align 2
@@ -1318,7 +1318,7 @@ _Float16 _Complex div_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[TMP2]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP3:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP3]]
//
@@ -1340,7 +1340,7 @@ _Float16 _Complex div_half_cc(_Float16 _Complex a, _Float16 _Complex b) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[TMP2]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP3:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP3]]
//
@@ -1367,7 +1367,7 @@ _Float16 _Complex addcompound_half_rr(_Float16 a, _Float16 c) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[TMP1]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP2]]
//
@@ -1393,7 +1393,7 @@ _Float16 _Complex addcompound_half_rr(_Float16 a, _Float16 c) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[TMP1]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP2:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP2]]
//
@@ -1553,7 +1553,7 @@ _Float16 _Complex addcompound_half_cc(_Float16 _Complex a, _Float16 _Complex c)
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[FNEG]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1569,7 +1569,7 @@ _Float16 _Complex addcompound_half_cc(_Float16 _Complex a, _Float16 _Complex c)
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1630,7 +1630,7 @@ _Float16 _Complex MinusOp_c(_Float16 _Complex a) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[TMP0]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1645,7 +1645,7 @@ _Float16 _Complex MinusOp_c(_Float16 _Complex a) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1702,7 +1702,7 @@ _Float16 _Complex PlusOp_c(_Float16 _Complex a) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[TMP0]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1717,7 +1717,7 @@ _Float16 _Complex PlusOp_c(_Float16 _Complex a) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1735,7 +1735,7 @@ _Float16 _Complex RealOp_r(_Float16 a) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[TMP0]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1751,7 +1751,7 @@ _Float16 _Complex RealOp_r(_Float16 a) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP0:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP0]]
//
@@ -1767,8 +1767,8 @@ _Float16 _Complex RealOp_c(_Float16 _Complex a) {
// AVX-NEXT: [[TMP0:%.*]] = load half, ptr [[A_ADDR]], align 2
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_REALP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1781,8 +1781,8 @@ _Float16 _Complex RealOp_c(_Float16 _Complex a) {
// X86-NEXT: [[EXT:%.*]] = fpext half [[TMP0]] to float
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_REALP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1800,7 +1800,7 @@ _Float16 _Complex ImagOp_r(_Float16 a) {
// AVX-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// AVX-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// AVX-NEXT: store half [[TMP0]], ptr [[RETVAL_REALP]], align 2
-// AVX-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// AVX-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// AVX-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// AVX-NEXT: ret <2 x half> [[TMP1]]
//
@@ -1816,7 +1816,7 @@ _Float16 _Complex ImagOp_r(_Float16 a) {
// X86-NEXT: [[RETVAL_REALP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 0
// X86-NEXT: [[RETVAL_IMAGP:%.*]] = getelementptr inbounds nuw { half, half }, ptr [[RETVAL]], i32 0, i32 1
// X86-NEXT: store half [[UNPROMOTION]], ptr [[RETVAL_REALP]], align 2
-// X86-NEXT: store half 0xH0000, ptr [[RETVAL_IMAGP]], align 2
+// X86-NEXT: store half f0x0000, ptr [[RETVAL_IMAGP]], align 2
// X86-NEXT: [[TMP0:%.*]] = load <2 x half>, ptr [[RETVAL]], align 2
// X86-NEXT: ret <2 x half> [[TMP0]]
//
diff --git a/clang/test/CodeGen/X86/avx512fp16-builtins.c b/clang/test/CodeGen/X86/avx512fp16-builtins.c
index a766476ca92bd1..0c43787b26359a 100644
--- a/clang/test/CodeGen/X86/avx512fp16-builtins.c
+++ b/clang/test/CodeGen/X86/avx512fp16-builtins.c
@@ -3689,7 +3689,7 @@ __m128h test_mm_maskz_fmadd_sh(__mmask8 __U, __m128h __A, __m128h __B, __m128h _
// CHECK-NEXT: [[FMA:%.+]] = call half @llvm.fma.f16(half [[A]], half [[B]], half [[C]])
// CHECK-NEXT: bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half 0xH0000
+ // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half f0x0000
// CHECK-NEXT: insertelement <8 x half> [[ORIGA]], half [[SEL]], i64 0
return _mm_maskz_fmadd_sh(__U, __A, __B, __C);
}
@@ -3702,7 +3702,7 @@ __m128h test_mm_maskz_fmadd_round_sh(__mmask8 __U, __m128h __A, __m128h __B, __m
// CHECK-NEXT: [[FMA:%.+]] = call half @llvm.x86.avx512fp16.vfmadd.f16(half [[A]], half [[B]], half [[C]], i32 11)
// CHECK-NEXT: bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half 0xH0000
+ // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half f0x0000
// CHECK-NEXT: insertelement <8 x half> [[ORIGA]], half [[SEL]], i64 0
return _mm_maskz_fmadd_round_sh(__U, __A, __B, __C, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}
@@ -3796,7 +3796,7 @@ __m128h test_mm_maskz_fmsub_sh(__mmask8 __U, __m128h __A, __m128h __B, __m128h _
// CHECK-NEXT: %{{.*}} = call half @llvm.fma.f16(half %{{.*}}, half %{{.*}}, half %{{.*}})
// CHECK-NEXT: %{{.*}} = bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: %{{.*}} = extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: %{{.*}} = select i1 %{{.*}}, half %{{.*}}, half 0xH0000
+ // CHECK-NEXT: %{{.*}} = select i1 %{{.*}}, half %{{.*}}, half f0x0000
// CHECK-NEXT: %{{.*}} = insertelement <8 x half> %{{.*}}, half %{{.*}}, i64 0
// CHECK-NEXT: ret <8 x half> %{{.*}}
return _mm_maskz_fmsub_sh(__U, __A, __B, __C);
@@ -3811,7 +3811,7 @@ __m128h test_mm_maskz_fmsub_round_sh(__mmask8 __U, __m128h __A, __m128h __B, __m
// CHECK-NEXT: %{{.*}} = call half @llvm.x86.avx512fp16.vfmadd.f16(half %{{.*}}, half %{{.*}}, half %{{.*}}, i32 11)
// CHECK-NEXT: %{{.*}} = bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: %{{.*}} = extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: %{{.*}} = select i1 %{{.*}}, half %{{.*}}, half 0xH0000
+ // CHECK-NEXT: %{{.*}} = select i1 %{{.*}}, half %{{.*}}, half f0x0000
// CHECK-NEXT: %{{.*}} = insertelement <8 x half> %{{.*}}, half %{{.*}}, i64 0
// CHECK-NEXT: ret <8 x half> %{{.*}}
return _mm_maskz_fmsub_round_sh(__U, __A, __B, __C, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
@@ -3905,7 +3905,7 @@ __m128h test_mm_maskz_fnmadd_sh(__mmask8 __U, __m128h __A, __m128h __B, __m128h
// CHECK-NEXT: [[FMA:%.+]] = call half @llvm.fma.f16(half [[A]], half [[B]], half [[C]])
// CHECK-NEXT: bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half 0xH0000
+ // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half f0x0000
// CHECK-NEXT: insertelement <8 x half> [[ORIGA]], half [[SEL]], i64 0
return _mm_maskz_fnmadd_sh(__U, __A, __B, __C);
}
@@ -3919,7 +3919,7 @@ __m128h test_mm_maskz_fnmadd_round_sh(__mmask8 __U, __m128h __A, __m128h __B, __
// CHECK-NEXT: [[FMA:%.+]] = call half @llvm.x86.avx512fp16.vfmadd.f16(half [[A]], half [[B]], half [[C]], i32 11)
// CHECK-NEXT: bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half 0xH0000
+ // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half f0x0000
// CHECK-NEXT: insertelement <8 x half> [[ORIGA]], half [[SEL]], i64 0
return _mm_maskz_fnmadd_round_sh(__U, __A, __B, __C, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}
@@ -4015,7 +4015,7 @@ __m128h test_mm_maskz_fnmsub_sh(__mmask8 __U, __m128h __A, __m128h __B, __m128h
// CHECK-NEXT: [[FMA:%.+]] = call half @llvm.fma.f16(half [[A]], half [[B]], half [[C]])
// CHECK-NEXT: bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half 0xH0000
+ // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half f0x0000
// CHECK-NEXT: insertelement <8 x half> [[ORIGA]], half [[SEL]], i64 0
return _mm_maskz_fnmsub_sh(__U, __A, __B, __C);
}
@@ -4030,7 +4030,7 @@ __m128h test_mm_maskz_fnmsub_round_sh(__mmask8 __U, __m128h __A, __m128h __B, __
// CHECK-NEXT: [[FMA:%.+]] = call half @llvm.x86.avx512fp16.vfmadd.f16(half [[A]], half [[B]], half [[C]], i32 11)
// CHECK-NEXT: bitcast i8 %{{.*}} to <8 x i1>
// CHECK-NEXT: extractelement <8 x i1> %{{.*}}, i64 0
- // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half 0xH0000
+ // CHECK-NEXT: [[SEL:%.+]] = select i1 %{{.*}}, half [[FMA]], half f0x0000
// CHECK-NEXT: insertelement <8 x half> [[ORIGA]], half [[SEL]], i64 0
return _mm_maskz_fnmsub_round_sh(__U, __A, __B, __C, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}
@@ -4441,13 +4441,13 @@ __m128h test_mm_maskz_fmul_round_sch(__mmask8 __U, __m128h __A, __m128h __B) {
_Float16 test_mm512_reduce_add_ph(__m512h __W) {
// CHECK-LABEL: @test_mm512_reduce_add_ph
- // CHECK: call reassoc half @llvm.vector.reduce.fadd.v32f16(half 0xH8000, <32 x half> %{{.*}})
+ // CHECK: call reassoc half @llvm.vector.reduce.fadd.v32f16(half f0x8000, <32 x half> %{{.*}})
return _mm512_reduce_add_ph(__W);
}
_Float16 test_mm512_reduce_mul_ph(__m512h __W) {
// CHECK-LABEL: @test_mm512_reduce_mul_ph
- // CHECK: call reassoc half @llvm.vector.reduce.fmul.v32f16(half 0xH3C00, <32 x half> %{{.*}})
+ // CHECK: call reassoc half @llvm.vector.reduce.fmul.v32f16(half f0x3C00, <32 x half> %{{.*}})
return _mm512_reduce_mul_ph(__W);
}
diff --git a/clang/test/CodeGen/X86/avx512vlfp16-builtins.c b/clang/test/CodeGen/X86/avx512vlfp16-builtins.c
index 3a212ed6834371..c55ef277e4cd37 100644
--- a/clang/test/CodeGen/X86/avx512vlfp16-builtins.c
+++ b/clang/test/CodeGen/X86/avx512vlfp16-builtins.c
@@ -17,13 +17,13 @@ _Float16 test_mm256_cvtsh_h(__m256h __A) {
__m128h test_mm_set_sh(_Float16 __h) {
// CHECK-LABEL: @test_mm_set_sh
// CHECK: insertelement <8 x half> {{.*}}, i32 0
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 1
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 2
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 3
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 4
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 5
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 6
- // CHECK: insertelement <8 x half> %{{.*}}, half 0xH0000, i32 7
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 1
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 2
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 3
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 4
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 5
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 6
+ // CHECK: insertelement <8 x half> %{{.*}}, half f0x0000, i32 7
return _mm_set_sh(__h);
}
@@ -3030,13 +3030,13 @@ __m256h test_mm256_permutexvar_ph(__m256i __A, __m256h __B) {
_Float16 test_mm256_reduce_add_ph(__m256h __W) {
// CHECK-LABEL: @test_mm256_reduce_add_ph
- // CHECK: call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH8000, <16 x half> %{{.*}})
+ // CHECK: call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x8000, <16 x half> %{{.*}})
return _mm256_reduce_add_ph(__W);
}
_Float16 test_mm256_reduce_mul_ph(__m256h __W) {
// CHECK-LABEL: @test_mm256_reduce_mul_ph
- // CHECK: call reassoc half @llvm.vector.reduce.fmul.v16f16(half 0xH3C00, <16 x half> %{{.*}})
+ // CHECK: call reassoc half @llvm.vector.reduce.fmul.v16f16(half f0x3C00, <16 x half> %{{.*}})
return _mm256_reduce_mul_ph(__W);
}
@@ -3054,13 +3054,13 @@ _Float16 test_mm256_reduce_min_ph(__m256h __W) {
_Float16 test_mm_reduce_add_ph(__m128h __W) {
// CHECK-LABEL: @test_mm_reduce_add_ph
- // CHECK: call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH8000, <8 x half> %{{.*}})
+ // CHECK: call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x8000, <8 x half> %{{.*}})
return _mm_reduce_add_ph(__W);
}
_Float16 test_mm_reduce_mul_ph(__m128h __W) {
// CHECK-LABEL: @test_mm_reduce_mul_ph
- // CHECK: call reassoc half @llvm.vector.reduce.fmul.v8f16(half 0xH3C00, <8 x half> %{{.*}})
+ // CHECK: call reassoc half @llvm.vector.reduce.fmul.v8f16(half f0x3C00, <8 x half> %{{.*}})
return _mm_reduce_mul_ph(__W);
}
diff --git a/clang/test/CodeGen/X86/long-double-config-size.c b/clang/test/CodeGen/X86/long-double-config-size.c
index 563a483ca8cd6e..a9833fc1e7b96e 100644
--- a/clang/test/CodeGen/X86/long-double-config-size.c
+++ b/clang/test/CodeGen/X86/long-double-config-size.c
@@ -6,8 +6,8 @@
long double global;
// SIZE64: @global = dso_local global double 0
-// SIZE80: @global = dso_local global x86_fp80 0xK{{0+}}, align 16
-// SIZE128: @global = dso_local global fp128 0
+// SIZE80: @global = dso_local global x86_fp80 f0x{{0+}}, align 16
+// SIZE128: @global = dso_local global fp128 f0x
long double func(long double param) {
// SIZE64: define dso_local double @func(double noundef %param)
diff --git a/clang/test/CodeGen/X86/x86-atomic-long_double.c b/clang/test/CodeGen/X86/x86-atomic-long_double.c
index 9c82784807daca..8b4224de04dd62 100644
--- a/clang/test/CodeGen/X86/x86-atomic-long_double.c
+++ b/clang/test/CodeGen/X86/x86-atomic-long_double.c
@@ -18,7 +18,7 @@
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP8:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], 0xK3FFF8000000000000000
+// X64-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], f0x3FFF8000000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP3:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -48,7 +48,7 @@
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP3:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], 0xK3FFF8000000000000000
+// X86-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], f0x3FFF8000000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -80,7 +80,7 @@ long double testinc(_Atomic long double *addr) {
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP8:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], 0xKBFFF8000000000000000
+// X64-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], f0xBFFF8000000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP3:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -110,7 +110,7 @@ long double testinc(_Atomic long double *addr) {
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP3:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], 0xKBFFF8000000000000000
+// X86-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], f0xBFFF8000000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -143,7 +143,7 @@ long double testdec(_Atomic long double *addr) {
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP8:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], 0xK4003C800000000000000
+// X64-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], f0x4003C800000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP3:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -178,7 +178,7 @@ long double testdec(_Atomic long double *addr) {
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP3:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], 0xK4003C800000000000000
+// X86-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], f0x4003C800000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -206,7 +206,7 @@ long double testcompassign(_Atomic long double *addr) {
// X64-NEXT: store ptr [[ADDR]], ptr [[ADDR_ADDR]], align 8
// X64-NEXT: [[TMP0:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 8
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP]], i8 0, i64 16, i1 false)
-// X64-NEXT: store x86_fp80 0xK4005E600000000000000, ptr [[ATOMIC_TEMP]], align 16
+// X64-NEXT: store x86_fp80 f0x4005E600000000000000, ptr [[ATOMIC_TEMP]], align 16
// X64-NEXT: [[TMP1:%.*]] = load i128, ptr [[ATOMIC_TEMP]], align 16
// X64-NEXT: store atomic i128 [[TMP1]], ptr [[TMP0]] seq_cst, align 16
// X64-NEXT: [[TMP2:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 8
@@ -224,7 +224,7 @@ long double testcompassign(_Atomic long double *addr) {
// X86-NEXT: store ptr [[ADDR]], ptr [[ADDR_ADDR]], align 4
// X86-NEXT: [[TMP0:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP]], i8 0, i64 12, i1 false)
-// X86-NEXT: store x86_fp80 0xK4005E600000000000000, ptr [[ATOMIC_TEMP]], align 4
+// X86-NEXT: store x86_fp80 f0x4005E600000000000000, ptr [[ATOMIC_TEMP]], align 4
// X86-NEXT: call void @__atomic_store(i32 noundef 12, ptr noundef [[TMP0]], ptr noundef [[ATOMIC_TEMP]], i32 noundef 5)
// X86-NEXT: [[TMP1:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 4
// X86-NEXT: call void @__atomic_load(i32 noundef 12, ptr noundef [[TMP1]], ptr noundef [[ATOMIC_TEMP1]], i32 noundef 5)
@@ -253,7 +253,7 @@ long double testassign(_Atomic long double *addr) {
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP8:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], 0xK3FFF8000000000000000
+// X64-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], f0x3FFF8000000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP3:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -283,7 +283,7 @@ long double testassign(_Atomic long double *addr) {
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP3:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], 0xK3FFF8000000000000000
+// X86-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP2]], f0x3FFF8000000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -314,7 +314,7 @@ long double test_volatile_inc(volatile _Atomic long double *addr) {
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP8:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], 0xKBFFF8000000000000000
+// X64-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], f0xBFFF8000000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP3:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -344,7 +344,7 @@ long double test_volatile_inc(volatile _Atomic long double *addr) {
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP3:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], 0xKBFFF8000000000000000
+// X86-NEXT: [[DEC:%.*]] = fadd x86_fp80 [[TMP2]], f0xBFFF8000000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -376,7 +376,7 @@ long double test_volatile_dec(volatile _Atomic long double *addr) {
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP8:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], 0xK4003C800000000000000
+// X64-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], f0x4003C800000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP3:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -411,7 +411,7 @@ long double test_volatile_dec(volatile _Atomic long double *addr) {
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP2:%.*]] = phi x86_fp80 [ [[TMP1]], %[[ENTRY]] ], [ [[TMP3:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], 0xK4003C800000000000000
+// X86-NEXT: [[SUB:%.*]] = fsub x86_fp80 [[TMP2]], f0x4003C800000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP2]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -439,7 +439,7 @@ long double test_volatile_compassign(volatile _Atomic long double *addr) {
// X64-NEXT: store ptr [[ADDR]], ptr [[ADDR_ADDR]], align 8
// X64-NEXT: [[TMP0:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 8
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP]], i8 0, i64 16, i1 false)
-// X64-NEXT: store x86_fp80 0xK4005E600000000000000, ptr [[ATOMIC_TEMP]], align 16
+// X64-NEXT: store x86_fp80 f0x4005E600000000000000, ptr [[ATOMIC_TEMP]], align 16
// X64-NEXT: [[TMP1:%.*]] = load i128, ptr [[ATOMIC_TEMP]], align 16
// X64-NEXT: store atomic volatile i128 [[TMP1]], ptr [[TMP0]] seq_cst, align 16
// X64-NEXT: [[TMP2:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 8
@@ -457,7 +457,7 @@ long double test_volatile_compassign(volatile _Atomic long double *addr) {
// X86-NEXT: store ptr [[ADDR]], ptr [[ADDR_ADDR]], align 4
// X86-NEXT: [[TMP0:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP]], i8 0, i64 12, i1 false)
-// X86-NEXT: store x86_fp80 0xK4005E600000000000000, ptr [[ATOMIC_TEMP]], align 4
+// X86-NEXT: store x86_fp80 f0x4005E600000000000000, ptr [[ATOMIC_TEMP]], align 4
// X86-NEXT: call void @__atomic_store(i32 noundef 12, ptr noundef [[TMP0]], ptr noundef [[ATOMIC_TEMP]], i32 noundef 5)
// X86-NEXT: [[TMP1:%.*]] = load ptr, ptr [[ADDR_ADDR]], align 4
// X86-NEXT: call void @__atomic_load(i32 noundef 12, ptr noundef [[TMP1]], ptr noundef [[ATOMIC_TEMP1]], i32 noundef 5)
@@ -483,7 +483,7 @@ long double test_volatile_assign(volatile _Atomic long double *addr) {
// X64-NEXT: br label %[[ATOMIC_OP:.*]]
// X64: [[ATOMIC_OP]]:
// X64-NEXT: [[TMP1:%.*]] = phi x86_fp80 [ [[TMP0]], %[[ENTRY]] ], [ [[TMP7:%.*]], %[[ATOMIC_OP]] ]
-// X64-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP1]], 0xK3FFF8000000000000000
+// X64-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP1]], f0x3FFF8000000000000000
// X64-NEXT: call void @llvm.memset.p0.i64(ptr align 16 [[ATOMIC_TEMP1]], i8 0, i64 16, i1 false)
// X64-NEXT: store x86_fp80 [[TMP1]], ptr [[ATOMIC_TEMP1]], align 16
// X64-NEXT: [[TMP2:%.*]] = load i128, ptr [[ATOMIC_TEMP1]], align 16
@@ -497,7 +497,7 @@ long double test_volatile_assign(volatile _Atomic long double *addr) {
// X64-NEXT: [[TMP7]] = load x86_fp80, ptr [[ATOMIC_TEMP3]], align 16
// X64-NEXT: br i1 [[TMP6]], label %[[ATOMIC_CONT:.*]], label %[[ATOMIC_OP]]
// X64: [[ATOMIC_CONT]]:
-// X64-NEXT: [[CMP:%.*]] = fcmp oeq x86_fp80 [[INC]], 0xK3FFF8000000000000000
+// X64-NEXT: [[CMP:%.*]] = fcmp oeq x86_fp80 [[INC]], f0x3FFF8000000000000000
// X64-NEXT: [[CONV:%.*]] = zext i1 [[CMP]] to i32
// X64-NEXT: ret i32 [[CONV]]
//
@@ -512,7 +512,7 @@ long double test_volatile_assign(volatile _Atomic long double *addr) {
// X86-NEXT: br label %[[ATOMIC_OP:.*]]
// X86: [[ATOMIC_OP]]:
// X86-NEXT: [[TMP1:%.*]] = phi x86_fp80 [ [[TMP0]], %[[ENTRY]] ], [ [[TMP2:%.*]], %[[ATOMIC_OP]] ]
-// X86-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP1]], 0xK3FFF8000000000000000
+// X86-NEXT: [[INC:%.*]] = fadd x86_fp80 [[TMP1]], f0x3FFF8000000000000000
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP1]], i8 0, i64 12, i1 false)
// X86-NEXT: store x86_fp80 [[TMP1]], ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[ATOMIC_TEMP2]], i8 0, i64 12, i1 false)
@@ -521,7 +521,7 @@ long double test_volatile_assign(volatile _Atomic long double *addr) {
// X86-NEXT: [[TMP2]] = load x86_fp80, ptr [[ATOMIC_TEMP1]], align 4
// X86-NEXT: br i1 [[CALL]], label %[[ATOMIC_CONT:.*]], label %[[ATOMIC_OP]]
// X86: [[ATOMIC_CONT]]:
-// X86-NEXT: [[CMP:%.*]] = fcmp oeq x86_fp80 [[INC]], 0xK3FFF8000000000000000
+// X86-NEXT: [[CMP:%.*]] = fcmp oeq x86_fp80 [[INC]], f0x3FFF8000000000000000
// X86-NEXT: [[CONV:%.*]] = zext i1 [[CMP]] to i32
// X86-NEXT: ret i32 [[CONV]]
//
diff --git a/clang/test/CodeGen/X86/x86_64-longdouble.c b/clang/test/CodeGen/X86/x86_64-longdouble.c
index 7446664bef5bb2..21ba1347d4062a 100644
--- a/clang/test/CodeGen/X86/x86_64-longdouble.c
+++ b/clang/test/CodeGen/X86/x86_64-longdouble.c
@@ -11,12 +11,12 @@
// Android uses fp128 for long double but other x86_64 targets use x86_fp80.
long double dataLD = 1.0L;
-// ANDROID: @dataLD ={{.*}} local_unnamed_addr global fp128 0xL00000000000000003FFF000000000000, align 16
-// GNU: @dataLD ={{.*}} local_unnamed_addr global x86_fp80 0xK3FFF8000000000000000, align 16
+// ANDROID: @dataLD ={{.*}} local_unnamed_addr global fp128 f0x3FFF0000000000000000000000000000, align 16
+// GNU: @dataLD ={{.*}} local_unnamed_addr global x86_fp80 f0x3FFF8000000000000000, align 16
long double _Complex dataLDC = {1.0L, 1.0L};
-// ANDROID: @dataLDC ={{.*}} local_unnamed_addr global { fp128, fp128 } { fp128 0xL00000000000000003FFF000000000000, fp128 0xL00000000000000003FFF000000000000 }, align 16
-// GNU: @dataLDC ={{.*}} local_unnamed_addr global { x86_fp80, x86_fp80 } { x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000 }, align 16
+// ANDROID: @dataLDC ={{.*}} local_unnamed_addr global { fp128, fp128 } { fp128 f0x3FFF0000000000000000000000000000, fp128 f0x3FFF0000000000000000000000000000 }, align 16
+// GNU: @dataLDC ={{.*}} local_unnamed_addr global { x86_fp80, x86_fp80 } { x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000 }, align 16
long double TestLD(long double x) {
return x * x;
diff --git a/clang/test/CodeGen/atomic.c b/clang/test/CodeGen/atomic.c
index 16c29e282ddd9f..44e7dd42a80d99 100644
--- a/clang/test/CodeGen/atomic.c
+++ b/clang/test/CodeGen/atomic.c
@@ -7,8 +7,8 @@
// CHECK: @[[GLOB_INT:.+]] = internal global i32 0
// CHECK: @[[GLOB_FLT:.+]] = internal global float {{[0e\+-\.]+}}, align
// CHECK: @[[GLOB_DBL:.+]] = internal global double {{[0e\+-\.]+}}, align
-// X86: @[[GLOB_LONGDBL:.+]] = internal global x86_fp80 {{[0xK]+}}, align
-// SYSTEMZ: @[[GLOB_LONGDBL:.+]] = internal global fp128 {{[0xL]+}}, align
+// X86: @[[GLOB_LONGDBL:.+]] = internal global x86_fp80 {{[f0x]+}}, align
+// SYSTEMZ: @[[GLOB_LONGDBL:.+]] = internal global fp128 {{[f0x]+}}, align
int atomic(void) {
// non-sensical test for sync functions
diff --git a/clang/test/CodeGen/builtin-complex.c b/clang/test/CodeGen/builtin-complex.c
index f9c7144b59ccb9..2f8e0e97684a55 100644
--- a/clang/test/CodeGen/builtin-complex.c
+++ b/clang/test/CodeGen/builtin-complex.c
@@ -6,8 +6,8 @@
// CHECK-FLOAT: @global ={{.*}} global { [[T:float]], [[T]] } { [[T]] 1.0{{.*}}, [[T]] 2.0{{.*}} }
// CHECK-DOUBLE: @global ={{.*}} global { [[T:double]], [[T]] } { [[T]] 1.0{{.*}}, [[T]] 2.0{{.*}} }
-// CHECK-FP80: @global ={{.*}} global { [[T:x86_fp80]], [[T]] } { [[T]] 0xK3FFF8000000000000000, [[T]] 0xK40008000000000000000 }
-// CHECK-FP128: @global ={{.*}} global { [[T:fp128]], [[T]] } { [[T]] 0xL00000000000000003FFF000000000000, [[T]] 0xL00000000000000004000000000000000 }
+// CHECK-FP80: @global ={{.*}} global { [[T:x86_fp80]], [[T]] } { [[T]] f0x3FFF8000000000000000, [[T]] f0x40008000000000000000 }
+// CHECK-FP128: @global ={{.*}} global { [[T:fp128]], [[T]] } { [[T]] f0x3FFF0000000000000000000000000000, [[T]] f0x40000000000000000000000000000000 }
_Complex T global = __builtin_complex(1.0, 2.0);
// CHECK-LABEL: @test
diff --git a/clang/test/CodeGen/builtin_Float16.c b/clang/test/CodeGen/builtin_Float16.c
index 099d2ad5697e34..ee21bcc3887125 100644
--- a/clang/test/CodeGen/builtin_Float16.c
+++ b/clang/test/CodeGen/builtin_Float16.c
@@ -6,12 +6,12 @@
void test_float16_builtins(void) {
volatile _Float16 res;
- // CHECK: store volatile half 0xH7C00, ptr %res, align 2
+ // CHECK: store volatile half f0x7C00, ptr %res, align 2
res = __builtin_huge_valf16();
- // CHECK: store volatile half 0xH7C00, ptr %res, align 2
+ // CHECK: store volatile half f0x7C00, ptr %res, align 2
res = __builtin_inff16();
- // CHECK: store volatile half 0xH7E00, ptr %res, align 2
+ // CHECK: store volatile half f0x7E00, ptr %res, align 2
res = __builtin_nanf16("");
- // CHECK: store volatile half 0xH7D00, ptr %res, align 2
+ // CHECK: store volatile half f0x7D00, ptr %res, align 2
res = __builtin_nansf16("");
}
diff --git a/clang/test/CodeGen/builtins-elementwise-math.c b/clang/test/CodeGen/builtins-elementwise-math.c
index 7f6b5f26eb9307..ae440adb87b32d 100644
--- a/clang/test/CodeGen/builtins-elementwise-math.c
+++ b/clang/test/CodeGen/builtins-elementwise-math.c
@@ -1033,7 +1033,7 @@ void test_builtin_elementwise_fma(float f32, double f64,
// CHECK: [[V2F16_0:%.+]] = load <2 x half>, ptr %v2f16.addr
// CHECK-NEXT: [[V2F16_1:%.+]] = load <2 x half>, ptr %v2f16.addr
- // CHECK-NEXT: call <2 x half> @llvm.fma.v2f16(<2 x half> [[V2F16_0]], <2 x half> [[V2F16_1]], <2 x half> splat (half 0xH4400))
+ // CHECK-NEXT: call <2 x half> @llvm.fma.v2f16(<2 x half> [[V2F16_0]], <2 x half> [[V2F16_1]], <2 x half> splat (half f0x4400))
half2 tmp2_v2f16 = __builtin_elementwise_fma(v2f16, v2f16, (half2)4.0);
}
diff --git a/clang/test/CodeGen/builtins-nvptx.c b/clang/test/CodeGen/builtins-nvptx.c
index 163aee4799ff0e..60208279eec9bb 100644
--- a/clang/test/CodeGen/builtins-nvptx.c
+++ b/clang/test/CodeGen/builtins-nvptx.c
@@ -999,13 +999,13 @@ __device__ void nvvm_cvt_sm89() {
// CHECK_PTX81_SM89: call i16 @llvm.nvvm.ff.to.e5m2x2.rn.relu(float 1.000000e+00, float 1.000000e+00)
__nvvm_ff_to_e5m2x2_rn_relu(1.0f, 1.0f);
- // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e4m3x2.rn(<2 x half> splat (half 0xH3C00))
+ // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e4m3x2.rn(<2 x half> splat (half f0x3C00))
__nvvm_f16x2_to_e4m3x2_rn({1.0f16, 1.0f16});
- // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e4m3x2.rn.relu(<2 x half> splat (half 0xH3C00))
+ // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e4m3x2.rn.relu(<2 x half> splat (half f0x3C00))
__nvvm_f16x2_to_e4m3x2_rn_relu({1.0f16, 1.0f16});
- // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e5m2x2.rn(<2 x half> splat (half 0xH3C00))
+ // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e5m2x2.rn(<2 x half> splat (half f0x3C00))
__nvvm_f16x2_to_e5m2x2_rn({1.0f16, 1.0f16});
- // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e5m2x2.rn.relu(<2 x half> splat (half 0xH3C00))
+ // CHECK_PTX81_SM89: call i16 @llvm.nvvm.f16x2.to.e5m2x2.rn.relu(<2 x half> splat (half f0x3C00))
__nvvm_f16x2_to_e5m2x2_rn_relu({1.0f16, 1.0f16});
// CHECK_PTX81_SM89: call <2 x half> @llvm.nvvm.e4m3x2.to.f16x2.rn(i16 18504)
@@ -1033,14 +1033,14 @@ __device__ void nvvm_cvt_sm89() {
__device__ void nvvm_abs_neg_bf16_bf16x2_sm80() {
#if __CUDA_ARCH__ >= 800
- // CHECK_PTX70_SM80: call bfloat @llvm.nvvm.abs.bf16(bfloat 0xR3DCD)
+ // CHECK_PTX70_SM80: call bfloat @llvm.nvvm.abs.bf16(bfloat f0x3DCD)
__nvvm_abs_bf16(BF16);
- // CHECK_PTX70_SM80: call <2 x bfloat> @llvm.nvvm.abs.bf16x2(<2 x bfloat> splat (bfloat 0xR3DCD))
+ // CHECK_PTX70_SM80: call <2 x bfloat> @llvm.nvvm.abs.bf16x2(<2 x bfloat> splat (bfloat f0x3DCD))
__nvvm_abs_bf16x2(BF16X2);
- // CHECK_PTX70_SM80: call bfloat @llvm.nvvm.neg.bf16(bfloat 0xR3DCD)
+ // CHECK_PTX70_SM80: call bfloat @llvm.nvvm.neg.bf16(bfloat f0x3DCD)
__nvvm_neg_bf16(BF16);
- // CHECK_PTX70_SM80: call <2 x bfloat> @llvm.nvvm.neg.bf16x2(<2 x bfloat> splat (bfloat 0xR3DCD))
+ // CHECK_PTX70_SM80: call <2 x bfloat> @llvm.nvvm.neg.bf16x2(<2 x bfloat> splat (bfloat f0x3DCD))
__nvvm_neg_bf16x2(BF16X2);
#endif
// CHECK: ret void
diff --git a/clang/test/CodeGen/builtins.c b/clang/test/CodeGen/builtins.c
index eda6c67fdad00c..c98a270d271818 100644
--- a/clang/test/CodeGen/builtins.c
+++ b/clang/test/CodeGen/builtins.c
@@ -178,19 +178,19 @@ void bar(void) {
f = __builtin_huge_valf(); // CHECK: float 0x7FF0000000000000
d = __builtin_huge_val(); // CHECK: double 0x7FF0000000000000
- ld = __builtin_huge_vall(); // CHECK: x86_fp80 0xK7FFF8000000000000000
+ ld = __builtin_huge_vall(); // CHECK: x86_fp80 f0x7FFF8000000000000000
f = __builtin_nanf(""); // CHECK: float 0x7FF8000000000000
d = __builtin_nan(""); // CHECK: double 0x7FF8000000000000
- ld = __builtin_nanl(""); // CHECK: x86_fp80 0xK7FFFC000000000000000
+ ld = __builtin_nanl(""); // CHECK: x86_fp80 f0x7FFFC000000000000000
f = __builtin_nanf("0xAE98"); // CHECK: float 0x7FF815D300000000
d = __builtin_nan("0xAE98"); // CHECK: double 0x7FF800000000AE98
- ld = __builtin_nanl("0xAE98"); // CHECK: x86_fp80 0xK7FFFC00000000000AE98
+ ld = __builtin_nanl("0xAE98"); // CHECK: x86_fp80 f0x7FFFC00000000000AE98
f = __builtin_nansf(""); // CHECK: float 0x7FF4000000000000
d = __builtin_nans(""); // CHECK: double 0x7FF4000000000000
- ld = __builtin_nansl(""); // CHECK: x86_fp80 0xK7FFFA000000000000000
+ ld = __builtin_nansl(""); // CHECK: x86_fp80 f0x7FFFA000000000000000
f = __builtin_nansf("0xAE98"); // CHECK: float 0x7FF015D300000000
d = __builtin_nans("0xAE98"); // CHECK: double 0x7FF000000000AE98
- ld = __builtin_nansl("0xAE98");// CHECK: x86_fp80 0xK7FFF800000000000AE98
+ ld = __builtin_nansl("0xAE98");// CHECK: x86_fp80 f0x7FFF800000000000AE98
}
// CHECK: }
@@ -245,7 +245,7 @@ void test_float_builtins(__fp16 *H, float F, double D, long double LD) {
res = __builtin_isinf_sign(*H);
// CHECK: %[[ABS:.*]] = call half @llvm.fabs.f16(half %[[ARG:.*]])
- // CHECK: %[[ISINF:.*]] = fcmp oeq half %[[ABS]], 0xH7C00
+ // CHECK: %[[ISINF:.*]] = fcmp oeq half %[[ABS]], f0x7C00
// CHECK: %[[BITCAST:.*]] = bitcast half %[[ARG]] to i16
// CHECK: %[[ISNEG:.*]] = icmp slt i16 %[[BITCAST]], 0
// CHECK: %[[SIGN:.*]] = select i1 %[[ISNEG]], i32 -1, i32 1
@@ -269,7 +269,7 @@ void test_float_builtins(__fp16 *H, float F, double D, long double LD) {
res = __builtin_isinf_sign(LD);
// CHECK: %[[ABS:.*]] = call x86_fp80 @llvm.fabs.f80(x86_fp80 %[[ARG:.*]])
- // CHECK: %[[ISINF:.*]] = fcmp oeq x86_fp80 %[[ABS]], 0xK7FFF8000000000000000
+ // CHECK: %[[ISINF:.*]] = fcmp oeq x86_fp80 %[[ABS]], f0x7FFF8000000000000000
// CHECK: %[[BITCAST:.*]] = bitcast x86_fp80 %[[ARG]] to i80
// CHECK: %[[ISNEG:.*]] = icmp slt i80 %[[BITCAST]], 0
// CHECK: %[[SIGN:.*]] = select i1 %[[ISNEG]], i32 -1, i32 1
@@ -384,7 +384,7 @@ void test_float_builtin_ops(float F, double D, long double LD, int I) {
//FIXME: __builtin_fminimum_numl is not supported well yet.
resld = __builtin_fminimum_numl(1.0, 2.0);
- // CHECK: store volatile x86_fp80 0xK3FFF8000000000000000, ptr %resld, align 16
+ // CHECK: store volatile x86_fp80 f0x3FFF8000000000000000, ptr %resld, align 16
resf = __builtin_fmaximum_numf(F, F);
// CHECK: call float @llvm.maximumnum.f32
@@ -410,7 +410,7 @@ void test_float_builtin_ops(float F, double D, long double LD, int I) {
//FIXME: __builtin_fmaximum_numl is not supported well yet.
resld = __builtin_fmaximum_numl(1.0, 2.0);
- // CHECK: store volatile x86_fp80 0xK40008000000000000000, ptr %resld, align 16
+ // CHECK: store volatile x86_fp80 f0x40008000000000000000, ptr %resld, align 16
resf = __builtin_fabsf(F);
// CHECK: call float @llvm.fabs.f32
diff --git a/clang/test/CodeGen/catch-undef-behavior.c b/clang/test/CodeGen/catch-undef-behavior.c
index 7580290b0b0333..3ca87913b03246 100644
--- a/clang/test/CodeGen/catch-undef-behavior.c
+++ b/clang/test/CodeGen/catch-undef-behavior.c
@@ -241,8 +241,8 @@ int float_int_overflow(float f) {
int long_double_int_overflow(long double ld) {
// CHECK-UBSAN: alloca x86_fp80
- // CHECK-COMMON: %[[GE:.*]] = fcmp ogt x86_fp80 %[[F:.*]], 0xKC01E800000010000000
- // CHECK-COMMON: %[[LE:.*]] = fcmp olt x86_fp80 %[[F]], 0xK401E800000000000000
+ // CHECK-COMMON: %[[GE:.*]] = fcmp ogt x86_fp80 %[[F:.*]], f0xC01E800000010000000
+ // CHECK-COMMON: %[[LE:.*]] = fcmp olt x86_fp80 %[[F]], f0x401E800000000000000
// CHECK-COMMON: %[[INBOUNDS:.*]] = and i1 %[[GE]], %[[LE]]
// CHECK-COMMON-NEXT: br i1 %[[INBOUNDS]]
diff --git a/clang/test/CodeGen/const-init.c b/clang/test/CodeGen/const-init.c
index 175d221ad410a6..13590fb3adbc1a 100644
--- a/clang/test/CodeGen/const-init.c
+++ b/clang/test/CodeGen/const-init.c
@@ -141,7 +141,7 @@ void g28(void) {
typedef long double v2f80 __attribute((vector_size(24)));
// CHECK: @g28.a = internal global <1 x i64> splat (i64 10)
// @g28.b = internal global <12 x i16> <i16 0, i16 0, i16 0, i16 -32768, i16 16383, i16 0, i16 0, i16 0, i16 0, i16 -32768, i16 16384, i16 0>
- // @g28.c = internal global <2 x x86_fp80> <x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK40008000000000000000>, align 32
+ // @g28.c = internal global <2 x x86_fp80> <x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x40008000000000000000>, align 32
static v1i64 a = (v1i64)10LL;
//FIXME: support constant bitcast between vectors of x86_fp80
//static v12i16 b = (v12i16)(v2f80){1,2};
diff --git a/clang/test/CodeGen/fp16-ops-strictfp.c b/clang/test/CodeGen/fp16-ops-strictfp.c
index 25753e5b98bebd..da4a842cf67207 100644
--- a/clang/test/CodeGen/fp16-ops-strictfp.c
+++ b/clang/test/CodeGen/fp16-ops-strictfp.c
@@ -39,7 +39,7 @@ void foo(void) {
// CHECK: store {{.*}} half {{.*}}, ptr
h0 = (test);
- // NATIVE-HALF: call i1 @llvm.experimental.constrained.fcmp.f16(half %{{.*}}, half 0xH0000, metadata !"une", metadata !"fpexcept.strict")
+ // NATIVE-HALF: call i1 @llvm.experimental.constrained.fcmp.f16(half %{{.*}}, half f0x0000, metadata !"une", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
// NOTNATIVE: call i1 @llvm.experimental.constrained.fcmp.f32(float %{{.*}}, float 0.000000e+00, metadata !"une", metadata !"fpexcept.strict")
// CHECK: store {{.*}} i32 {{.*}}, ptr
@@ -59,28 +59,28 @@ void foo(void) {
// NOTNATIVE: store {{.*}} half {{.*}}, ptr
h1 = +h1;
- // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half 0xH3C00, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half f0x3C00, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fadd.f32(float %{{.*}}, float {{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call half @llvm.experimental.constrained.fptrunc.f16.f32(float %{{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// CHECK: store {{.*}} half {{.*}}, ptr
h1++;
- // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half 0xH3C00, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half f0x3C00, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fadd.f32(float %{{.*}}, float {{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call half @llvm.experimental.constrained.fptrunc.f16.f32(float %{{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// CHECK: store {{.*}} half {{.*}}, ptr
++h1;
- // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half 0xHBC00, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half f0xBC00, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fadd.f32(float %{{.*}}, float {{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call half @llvm.experimental.constrained.fptrunc.f16.f32(float %{{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// CHECK: store {{.*}} half {{.*}}, ptr
--h1;
- // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half 0xHBC00, metadata !"round.tonearest", metadata !"fpexcept.strict")
+ // NATIVE-HALF: call half @llvm.experimental.constrained.fadd.f16(half %{{.*}}, half f0xBC00, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fadd.f32(float %{{.*}}, float {{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
// NOTNATIVE: call half @llvm.experimental.constrained.fptrunc.f16.f32(float %{{.*}}, metadata !"round.tonearest", metadata !"fpexcept.strict")
@@ -491,7 +491,7 @@ void foo(void) {
// CHECK: store {{.*}} i32 {{.*}}, ptr
test = (h0 != i0);
- // NATIVE-HALF: call i1 @llvm.experimental.constrained.fcmp.f16(half %{{.*}}, half 0xH0000, metadata !"une", metadata !"fpexcept.strict")
+ // NATIVE-HALF: call i1 @llvm.experimental.constrained.fcmp.f16(half %{{.*}}, half f0x0000, metadata !"une", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
// NOTNATIVE: call i1 @llvm.experimental.constrained.fcmp.f32(float %{{.*}}, float {{.*}}, metadata !"une", metadata !"fpexcept.strict")
// NOTNATIVE: call float @llvm.experimental.constrained.fpext.f32.f16(half %{{.*}}, metadata !"fpexcept.strict")
@@ -502,7 +502,7 @@ void foo(void) {
// Check assignments (inc. compound)
// CHECK: store {{.*}} half {{.*}}, ptr
- // xATIVE-HALF: store {{.*}} half 0xHC000 // FIXME: We should be folding here.
+ // xATIVE-HALF: store {{.*}} half f0xC000 // FIXME: We should be folding here.
h0 = h1;
// CHECK: call half @llvm.experimental.constrained.fptrunc.f16.f32(float -2.000000e+00, metadata !"round.tonearest", metadata !"fpexcept.strict")
diff --git a/clang/test/CodeGen/fp16-ops.c b/clang/test/CodeGen/fp16-ops.c
index 4c206690a7518e..2c1287de829ecb 100644
--- a/clang/test/CodeGen/fp16-ops.c
+++ b/clang/test/CodeGen/fp16-ops.c
@@ -355,12 +355,12 @@ void foo(void) {
// CHECK: [[F16TOF32]]
// CHECK: [[F16TOF32]]
// CHECK: [[F32TOF16]]
- // NATIVE-HALF: fcmp une half {{.*}}, 0xH0000
+ // NATIVE-HALF: fcmp une half {{.*}}, f0x0000
h1 = (h1 ? h2 : h0);
// Check assignments (inc. compound)
h0 = h1;
- // NOTNATIVE: store {{.*}} half 0xHC000
- // NATIVE-HALF: store {{.*}} half 0xHC000
+ // NOTNATIVE: store {{.*}} half f0xC000
+ // NATIVE-HALF: store {{.*}} half f0xC000
h0 = (__fp16)-2.0f;
// CHECK: [[F32TOF16]]
// NATIVE-HALF: fptrunc float
diff --git a/clang/test/CodeGen/isfpclass.c b/clang/test/CodeGen/isfpclass.c
index 1bf60b8fbca176..38431543e572a2 100644
--- a/clang/test/CodeGen/isfpclass.c
+++ b/clang/test/CodeGen/isfpclass.c
@@ -68,7 +68,7 @@ _Bool check_isfpclass_snan_f64_strict(double x) {
// CHECK-LABEL: define dso_local noundef i1 @check_isfpclass_zero_f16
// CHECK-SAME: (half noundef [[X:%.*]]) local_unnamed_addr #[[ATTR0]] {
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = fcmp oeq half [[X]], 0xH0000
+// CHECK-NEXT: [[TMP0:%.*]] = fcmp oeq half [[X]], f0x0000
// CHECK-NEXT: ret i1 [[TMP0]]
//
_Bool check_isfpclass_zero_f16(_Float16 x) {
diff --git a/clang/test/CodeGen/math-builtins-long.c b/clang/test/CodeGen/math-builtins-long.c
index 183349e0f01734..94f163ef38b91e 100644
--- a/clang/test/CodeGen/math-builtins-long.c
+++ b/clang/test/CodeGen/math-builtins-long.c
@@ -40,16 +40,16 @@ void foo(long double f, long double *l, int *i, const char *c) {
// PPCF128: call { fp128, i32 } @llvm.frexp.f128.i32(fp128 %{{.+}})
__builtin_frexpl(f,i);
- // F80: store x86_fp80 0xK7FFF8000000000000000, ptr
- // PPC: store ppc_fp128 0xM7FF00000000000000000000000000000, ptr
- // X86F128: store fp128 0xL00000000000000007FFF000000000000, ptr
- // PPCF128: store fp128 0xL00000000000000007FFF000000000000, ptr
+ // F80: store x86_fp80 f0x7FFF8000000000000000, ptr
+ // PPC: store ppc_fp128 f0x00000000000000007FF0000000000000, ptr
+ // X86F128: store fp128 f0x7FFF0000000000000000000000000000, ptr
+ // PPCF128: store fp128 f0x7FFF0000000000000000000000000000, ptr
*l = __builtin_huge_vall();
- // F80: store x86_fp80 0xK7FFF8000000000000000, ptr
- // PPC: store ppc_fp128 0xM7FF00000000000000000000000000000, ptr
- // X86F128: store fp128 0xL00000000000000007FFF000000000000, ptr
- // PPCF128: store fp128 0xL00000000000000007FFF000000000000, ptr
+ // F80: store x86_fp80 f0x7FFF8000000000000000, ptr
+ // PPC: store ppc_fp128 f0x00000000000000007FF0000000000000, ptr
+ // X86F128: store fp128 f0x7FFF0000000000000000000000000000, ptr
+ // PPCF128: store fp128 f0x7FFF0000000000000000000000000000, ptr
*l = __builtin_infl();
// F80: call x86_fp80 @ldexpl(x86_fp80 noundef %{{.+}}, i32 noundef %{{.+}})
diff --git a/clang/test/CodeGen/mingw-long-double.c b/clang/test/CodeGen/mingw-long-double.c
index 0fc8f015096827..64cdf537a05c02 100644
--- a/clang/test/CodeGen/mingw-long-double.c
+++ b/clang/test/CodeGen/mingw-long-double.c
@@ -16,13 +16,13 @@ struct {
// MSC64: @agggregate_LD = dso_local global { i8, [7 x i8], double } zeroinitializer, align 8
long double dataLD = 1.0L;
-// GNU32: @dataLD = dso_local global x86_fp80 0xK3FFF8000000000000000, align 4
-// GNU64: @dataLD = dso_local global x86_fp80 0xK3FFF8000000000000000, align 16
+// GNU32: @dataLD = dso_local global x86_fp80 f0x3FFF8000000000000000, align 4
+// GNU64: @dataLD = dso_local global x86_fp80 f0x3FFF8000000000000000, align 16
// MSC64: @dataLD = dso_local global double 1.000000e+00, align 8
long double _Complex dataLDC = {1.0L, 1.0L};
-// GNU32: @dataLDC = dso_local global { x86_fp80, x86_fp80 } { x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000 }, align 4
-// GNU64: @dataLDC = dso_local global { x86_fp80, x86_fp80 } { x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000 }, align 16
+// GNU32: @dataLDC = dso_local global { x86_fp80, x86_fp80 } { x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000 }, align 4
+// GNU64: @dataLDC = dso_local global { x86_fp80, x86_fp80 } { x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000 }, align 16
// MSC64: @dataLDC = dso_local global { double, double } { double 1.000000e+00, double 1.000000e+00 }, align 8
long double TestLD(long double x) {
diff --git a/clang/test/CodeGen/spir-half-type.cpp b/clang/test/CodeGen/spir-half-type.cpp
index d7c8bd9240abb0..ca1b9aa5657749 100644
--- a/clang/test/CodeGen/spir-half-type.cpp
+++ b/clang/test/CodeGen/spir-half-type.cpp
@@ -15,40 +15,40 @@ bool fcmp_const() {
// CHECK-NOT: llvm.convert.from.fp16
// CHECK: [[REG1:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp olt half [[REG1]], 0xH3C00
+ // CHECK-NEXT: fcmp olt half [[REG1]], f0x3C00
// CHECK: [[REG2:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp olt half [[REG2]], 0xH4000
+ // CHECK-NEXT: fcmp olt half [[REG2]], f0x4000
// CHECK: [[REG3:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp ogt half [[REG3]], 0xH3C00
+ // CHECK-NEXT: fcmp ogt half [[REG3]], f0x3C00
// CHECK: [[REG4:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp ogt half [[REG4]], 0xH4200
+ // CHECK-NEXT: fcmp ogt half [[REG4]], f0x4200
// CHECK: [[REG5:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp oeq half [[REG5]], 0xH3C00
+ // CHECK-NEXT: fcmp oeq half [[REG5]], f0x3C00
// CHECK: [[REG7:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp oeq half [[REG7]], 0xH4400
+ // CHECK-NEXT: fcmp oeq half [[REG7]], f0x4400
// CHECK: [[REG8:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp une half [[REG8]], 0xH3C00
+ // CHECK-NEXT: fcmp une half [[REG8]], f0x3C00
// CHECK: [[REG9:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp une half [[REG9]], 0xH4500
+ // CHECK-NEXT: fcmp une half [[REG9]], f0x4500
// CHECK: [[REG10:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp ole half [[REG10]], 0xH3C00
+ // CHECK-NEXT: fcmp ole half [[REG10]], f0x3C00
// CHECK: [[REG11:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp ole half [[REG11]], 0xH4600
+ // CHECK-NEXT: fcmp ole half [[REG11]], f0x4600
// CHECK: [[REG12:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp oge half [[REG12]], 0xH3C00
+ // CHECK-NEXT: fcmp oge half [[REG12]], f0x3C00
// CHECK: [[REG13:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: fcmp oge half [[REG13]], 0xH4700
+ // CHECK-NEXT: fcmp oge half [[REG13]], f0x4700
return a < b || a < 2.0f16 || a > b || a > 3.0f16 || a == b || a == 4.0f16 ||
a != b || a != 5.0f16 || a <= b || a <= 6.0f16 || a >= b ||
a >= 7.0f16;
@@ -94,8 +94,8 @@ _Float16 fadd() {
// CHECK-NOT: llvm.convert.from.fp16
// CHECK: [[REG1:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: [[REG2:%.*]] = fadd half [[REG1]], 0xH4000
- // CHECK-NEXT: [[REG3:%.*]] = fadd half [[REG2]], 0xH4200
+ // CHECK-NEXT: [[REG2:%.*]] = fadd half [[REG1]], f0x4000
+ // CHECK-NEXT: [[REG3:%.*]] = fadd half [[REG2]], f0x4200
// CHECK-NEXT: ret half [[REG3]]
return a + b + 3.0f16;
}
@@ -108,8 +108,8 @@ _Float16 fsub() {
// CHECK-NOT: llvm.convert.from.fp16
// CHECK: [[REG1:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: [[REG2:%.*]] = fsub half [[REG1]], 0xH4000
- // CHECK-NEXT: [[REG3:%.*]] = fsub half [[REG2]], 0xH4200
+ // CHECK-NEXT: [[REG2:%.*]] = fsub half [[REG1]], f0x4000
+ // CHECK-NEXT: [[REG3:%.*]] = fsub half [[REG2]], f0x4200
// CHECK-NEXT: ret half [[REG3]]
return a - b - 3.0f16;
}
@@ -125,8 +125,8 @@ _Float16 fmul(_Float16 arg) {
// CHECK: [[REG1:%.*]] = load half, ptr %a, align 2
// CHECK-NEXT: [[REG2:%.*]] = load half, ptr %arg.addr, align 2
// CHECK-NEXT: [[REG3:%.*]] = fmul half [[REG1]], [[REG2]]
- // CHECK-NEXT: [[REG4:%.*]] = fmul half [[REG3]], 0xH4000
- // CHECK-NEXT: [[REG5:%.*]] = fmul half [[REG4]], 0xH4200
+ // CHECK-NEXT: [[REG4:%.*]] = fmul half [[REG3]], f0x4000
+ // CHECK-NEXT: [[REG5:%.*]] = fmul half [[REG4]], f0x4200
// CHECK-NEXT: ret half [[REG5]]
return a * arg * b * 3.0f16;
}
@@ -139,8 +139,8 @@ _Float16 fdiv() {
// CHECK-NOT: llvm.convert.from.fp16
// CHECK: [[REG1:%.*]] = load half, ptr %a, align 2
- // CHECK-NEXT: [[REG2:%.*]] = fdiv half [[REG1]], 0xH4000
- // CHECK-NEXT: [[REG3:%.*]] = fdiv half [[REG2]], 0xH4200
+ // CHECK-NEXT: [[REG2:%.*]] = fdiv half [[REG1]], f0x4000
+ // CHECK-NEXT: [[REG3:%.*]] = fdiv half [[REG2]], f0x4200
// CHECK-NEXT: ret half [[REG3]]
return a / b / 3.0f16;
}
diff --git a/clang/test/CodeGenCUDA/types.cu b/clang/test/CodeGenCUDA/types.cu
index ee7ab717aac5a9..e86ebf3c62c01b 100644
--- a/clang/test/CodeGenCUDA/types.cu
+++ b/clang/test/CodeGenCUDA/types.cu
@@ -3,7 +3,7 @@
#include "Inputs/cuda.h"
-// HOST: @ld_host ={{.*}} global x86_fp80 0xK00000000000000000000
+// HOST: @ld_host ={{.*}} global x86_fp80 f0x00000000000000000000
long double ld_host;
// DEV: @ld_device ={{.*}} addrspace(1) externally_initialized global double 0.000000e+00
diff --git a/clang/test/CodeGenCXX/auto-var-init.cpp b/clang/test/CodeGenCXX/auto-var-init.cpp
index 94386e44573b5f..00a0b9e0aa1eda 100644
--- a/clang/test/CodeGenCXX/auto-var-init.cpp
+++ b/clang/test/CodeGenCXX/auto-var-init.cpp
@@ -443,14 +443,14 @@ TEST_UNINIT(fp16, __fp16);
// CHECK: %uninit = alloca half, align
// CHECK-NEXT: call void @{{.*}}used{{.*}}%uninit)
// PATTERN-LABEL: @test_fp16_uninit()
-// PATTERN: store half 0xHFFFF, ptr %uninit, align 2, !annotation [[AUTO_INIT]]
+// PATTERN: store half f0xFFFF, ptr %uninit, align 2, !annotation [[AUTO_INIT]]
// ZERO-LABEL: @test_fp16_uninit()
-// ZERO: store half 0xH0000, ptr %uninit, align 2, !annotation [[AUTO_INIT]]
+// ZERO: store half f0x0000, ptr %uninit, align 2, !annotation [[AUTO_INIT]]
TEST_BRACES(fp16, __fp16);
// CHECK-LABEL: @test_fp16_braces()
// CHECK: %braces = alloca half, align [[ALIGN:[0-9]*]]
-// CHECK-NEXT: store half 0xH0000, ptr %braces, align [[ALIGN]]
+// CHECK-NEXT: store half f0x0000, ptr %braces, align [[ALIGN]]
// CHECK-NOT: !annotation
// CHECK-NEXT: call void @{{.*}}used{{.*}}%braces)
@@ -491,14 +491,14 @@ TEST_UNINIT(longdouble, long double);
// CHECK: %uninit = alloca x86_fp80, align
// CHECK-NEXT: call void @{{.*}}used{{.*}}%uninit)
// PATTERN-LABEL: @test_longdouble_uninit()
-// PATTERN: store x86_fp80 0xKFFFFFFFFFFFFFFFFFFFF, ptr %uninit, align {{.+}}, !annotation [[AUTO_INIT]]
+// PATTERN: store x86_fp80 f0xFFFFFFFFFFFFFFFFFFFF, ptr %uninit, align {{.+}}, !annotation [[AUTO_INIT]]
// ZERO-LABEL: @test_longdouble_uninit()
-// ZERO: store x86_fp80 0xK00000000000000000000, ptr %uninit, align {{.+}}, !annotation [[AUTO_INIT]]
+// ZERO: store x86_fp80 f0x00000000000000000000, ptr %uninit, align {{.+}}, !annotation [[AUTO_INIT]]
TEST_BRACES(longdouble, long double);
// CHECK-LABEL: @test_longdouble_braces()
// CHECK: %braces = alloca x86_fp80, align [[ALIGN:[0-9]*]]
-// CHECK-NEXT: store x86_fp80 0xK00000000000000000000, ptr %braces, align [[ALIGN]]
+// CHECK-NEXT: store x86_fp80 f0x00000000000000000000, ptr %braces, align [[ALIGN]]
// CHECK-NOT: !annotation
// CHECK-NEXT: call void @{{.*}}used{{.*}}%braces)
@@ -1683,7 +1683,7 @@ TEST_UNINIT(longdoublevec32, long double __attribute__((vector_size(sizeof(long
// CHECK: %uninit = alloca <2 x x86_fp80>, align
// CHECK-NEXT: call void @{{.*}}used{{.*}}%uninit)
// PATTERN-LABEL: @test_longdoublevec32_uninit()
-// PATTERN: store <2 x x86_fp80> splat (x86_fp80 0xKFFFFFFFFFFFFFFFFFFFF), ptr %uninit, align 32, !annotation [[AUTO_INIT]]
+// PATTERN: store <2 x x86_fp80> splat (x86_fp80 f0xFFFFFFFFFFFFFFFFFFFF), ptr %uninit, align 32, !annotation [[AUTO_INIT]]
// ZERO-LABEL: @test_longdoublevec32_uninit()
// ZERO: store <2 x x86_fp80> zeroinitializer, ptr %uninit, align 32, !annotation [[AUTO_INIT]]
diff --git a/clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp b/clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp
index c8ab43194350e2..baccf4771c18c0 100644
--- a/clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp
+++ b/clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp
@@ -20,7 +20,7 @@ void f() {
// CHECK: call void @_Zli2_xPKcm({{.*}}, ptr noundef @[[s_bar]], i64 noundef 3)
// CHECK: call void @_Zli2_yw({{.*}} 97)
// CHECK: call void @_Zli2_zy({{.*}} 42)
- // CHECK: call void @_Zli2_fe({{.*}} x86_fp80 noundef 0xK3FFF8000000000000000)
+ // CHECK: call void @_Zli2_fe({{.*}} x86_fp80 noundef f0x3FFF8000000000000000)
// CHECK: call void @_ZN1SD1Ev({{.*}})
// CHECK: call void @_ZN1SD1Ev({{.*}})
// CHECK: call void @_ZN1SD1Ev({{.*}})
diff --git a/clang/test/CodeGenCXX/float128-declarations.cpp b/clang/test/CodeGenCXX/float128-declarations.cpp
index 84b8f7f33036b5..11c95cb25d9ee2 100644
--- a/clang/test/CodeGenCXX/float128-declarations.cpp
+++ b/clang/test/CodeGenCXX/float128-declarations.cpp
@@ -88,46 +88,46 @@ int main(void) {
__float128 f8l = f4l++;
__float128 arr1l[] = { -1.q, -0.q, -11.q };
}
-// CHECK-DAG: @_ZN12_GLOBAL__N_13f1nE = internal global fp128 0xL00000000000000000000000000000000
-// CHECK-DAG: @_ZN12_GLOBAL__N_13f2nE = internal global fp128 0xL00000000000000004004080000000000
+// CHECK-DAG: @_ZN12_GLOBAL__N_13f1nE = internal global fp128 f0x00000000000000000000000000000000
+// CHECK-DAG: @_ZN12_GLOBAL__N_13f2nE = internal global fp128 f0x40040800000000000000000000000000
// CHECK-DAG: @_ZN12_GLOBAL__N_15arr1nE = internal global [10 x fp128]
-// CHECK-DAG: @_ZN12_GLOBAL__N_15arr2nE = internal global [3 x fp128] [fp128 0xL33333333333333333FFF333333333333, fp128 0xL00000000000000004000800000000000, fp128 0xL00000000000000004025176592E00000]
+// CHECK-DAG: @_ZN12_GLOBAL__N_15arr2nE = internal global [3 x fp128] [fp128 f0x3FFF3333333333333333333333333333, fp128 f0x40008000000000000000000000000000, fp128 f0x4025176592E000000000000000000000]
// CHECK-DAG: define internal noundef fp128 @_ZN12_GLOBAL__N_16func1nERKu9__ieee128(ptr
-// CHECK-DAG: @f1f ={{.*}} global fp128 0xL00000000000000000000000000000000
-// CHECK-DAG: @f2f ={{.*}} global fp128 0xL33333333333333334004033333333333
+// CHECK-DAG: @f1f ={{.*}} global fp128 f0x00000000000000000000000000000000
+// CHECK-DAG: @f2f ={{.*}} global fp128 f0x40040333333333333333333333333333
// CHECK-DAG: @arr1f ={{.*}} global [10 x fp128]
-// CHECK-DAG: @arr2f ={{.*}} global [3 x fp128] [fp128 0xL3333333333333333BFFF333333333333, fp128 0xL0000000000000000C000800000000000, fp128 0xL0000000000000000C025176592E00000]
+// CHECK-DAG: @arr2f ={{.*}} global [3 x fp128] [fp128 f0xBFFF3333333333333333333333333333, fp128 f0xC0008000000000000000000000000000, fp128 f0xC025176592E000000000000000000000]
// CHECK-DAG: declare noundef fp128 @_Z6func1fu9__ieee128(fp128 noundef)
// CHECK-DAG: define linkonce_odr void @_ZN2C1C2Eu9__ieee128(ptr {{[^,]*}} %this, fp128 noundef %arg)
// CHECK-DAG: define linkonce_odr noundef fp128 @_ZN2C16func2cEu9__ieee128(fp128 noundef %arg)
// CHECK-DAG: define linkonce_odr noundef fp128 @_Z6func1tIu9__ieee128ET_S0_(fp128 noundef %arg)
-// CHECK-DAG: @__const.main.s1 = private unnamed_addr constant %struct.S1 { fp128 0xL00000000000000004006080000000000 }
-// CHECK-DAG: store fp128 0xLF0AFD0EBFF292DCE42E0B38CDD83F26F, ptr %f1l, align 16
-// CHECK-DAG: store fp128 0xL00000000000000008000000000000000, ptr %f2l, align 16
-// CHECK-DAG: store fp128 0xLFFFFFFFFFFFFFFFF7FFEFFFFFFFFFFFF, ptr %f3l, align 16
-// CHECK-DAG: store fp128 0xL0000000000000000BFFF000000000000, ptr %f5l, align 16
+// CHECK-DAG: @__const.main.s1 = private unnamed_addr constant %struct.S1 { fp128 f0x40060800000000000000000000000000 }
+// CHECK-DAG: store fp128 f0x42E0B38CDD83F26FF0AFD0EBFF292DCE, ptr %f1l, align 16
+// CHECK-DAG: store fp128 f0x80000000000000000000000000000000, ptr %f2l, align 16
+// CHECK-DAG: store fp128 f0x7FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF, ptr %f3l, align 16
+// CHECK-DAG: store fp128 f0xBFFF0000000000000000000000000000, ptr %f5l, align 16
// CHECK-DAG: [[F4L:%[a-z0-9]+]] = load fp128, ptr %f4l
-// CHECK-DAG: [[INC:%[a-z0-9]+]] = fadd fp128 [[F4L]], 0xL00000000000000003FFF000000000000
+// CHECK-DAG: [[INC:%[a-z0-9]+]] = fadd fp128 [[F4L]], f0x3FFF0000000000000000000000000000
// CHECK-DAG: store fp128 [[INC]], ptr %f4l
-// CHECK-X86-DAG: @_ZN12_GLOBAL__N_13f1nE = internal global fp128 0xL00000000000000000000000000000000
-// CHECK-X86-DAG: @_ZN12_GLOBAL__N_13f2nE = internal global fp128 0xL00000000000000004004080000000000
+// CHECK-X86-DAG: @_ZN12_GLOBAL__N_13f1nE = internal global fp128 f0x00000000000000000000000000000000
+// CHECK-X86-DAG: @_ZN12_GLOBAL__N_13f2nE = internal global fp128 f0x40040800000000000000000000000000
// CHECK-X86-DAG: @_ZN12_GLOBAL__N_15arr1nE = internal global [10 x fp128]
-// CHECK-X86-DAG: @_ZN12_GLOBAL__N_15arr2nE = internal global [3 x fp128] [fp128 0xL33333333333333333FFF333333333333, fp128 0xL00000000000000004000800000000000, fp128 0xL00000000000000004025176592E00000]
+// CHECK-X86-DAG: @_ZN12_GLOBAL__N_15arr2nE = internal global [3 x fp128] [fp128 f0x3FFF3333333333333333333333333333, fp128 f0x40008000000000000000000000000000, fp128 f0x4025176592E000000000000000000000]
// CHECK-X86-DAG: define internal noundef fp128 @_ZN12_GLOBAL__N_16func1nERKg(ptr
-// CHECK-X86-DAG: @f1f ={{.*}} global fp128 0xL00000000000000000000000000000000
-// CHECK-X86-DAG: @f2f ={{.*}} global fp128 0xL33333333333333334004033333333333
+// CHECK-X86-DAG: @f1f ={{.*}} global fp128 f0x00000000000000000000000000000000
+// CHECK-X86-DAG: @f2f ={{.*}} global fp128 f0x40040333333333333333333333333333
// CHECK-X86-DAG: @arr1f ={{.*}} global [10 x fp128]
-// CHECK-X86-DAG: @arr2f ={{.*}} global [3 x fp128] [fp128 0xL3333333333333333BFFF333333333333, fp128 0xL0000000000000000C000800000000000, fp128 0xL0000000000000000C025176592E00000]
+// CHECK-X86-DAG: @arr2f ={{.*}} global [3 x fp128] [fp128 f0xBFFF3333333333333333333333333333, fp128 f0xC0008000000000000000000000000000, fp128 f0xC025176592E000000000000000000000]
// CHECK-X86-DAG: declare noundef fp128 @_Z6func1fg(fp128 noundef)
// CHECK-X86-DAG: define linkonce_odr void @_ZN2C1C2Eg(ptr {{[^,]*}} %this, fp128 noundef %arg)
// CHECK-X86-DAG: define linkonce_odr noundef fp128 @_ZN2C16func2cEg(fp128 noundef %arg)
// CHECK-X86-DAG: define linkonce_odr noundef fp128 @_Z6func1tIgET_S0_(fp128 noundef %arg)
-// CHECK-X86-DAG: @__const.main.s1 = private unnamed_addr constant %struct.S1 { fp128 0xL00000000000000004006080000000000 }
-// CHECK-X86-DAG: store fp128 0xLF0AFD0EBFF292DCE42E0B38CDD83F26F, ptr %f1l, align 16
-// CHECK-X86-DAG: store fp128 0xL00000000000000008000000000000000, ptr %f2l, align 16
-// CHECK-X86-DAG: store fp128 0xLFFFFFFFFFFFFFFFF7FFEFFFFFFFFFFFF, ptr %f3l, align 16
-// CHECK-X86-DAG: store fp128 0xL0000000000000000BFFF000000000000, ptr %f5l, align 16
+// CHECK-X86-DAG: @__const.main.s1 = private unnamed_addr constant %struct.S1 { fp128 f0x40060800000000000000000000000000 }
+// CHECK-X86-DAG: store fp128 f0x42E0B38CDD83F26FF0AFD0EBFF292DCE, ptr %f1l, align 16
+// CHECK-X86-DAG: store fp128 f0x80000000000000000000000000000000, ptr %f2l, align 16
+// CHECK-X86-DAG: store fp128 f0x7FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF, ptr %f3l, align 16
+// CHECK-X86-DAG: store fp128 f0xBFFF0000000000000000000000000000, ptr %f5l, align 16
// CHECK-X86-DAG: [[F4L:%[a-z0-9]+]] = load fp128, ptr %f4l
-// CHECK-X86-DAG: [[INC:%[a-z0-9]+]] = fadd fp128 [[F4L]], 0xL00000000000000003FFF000000000000
+// CHECK-X86-DAG: [[INC:%[a-z0-9]+]] = fadd fp128 [[F4L]], f0x3FFF0000000000000000000000000000
// CHECK-X86-DAG: store fp128 [[INC]], ptr %f4l
diff --git a/clang/test/CodeGenCXX/float16-declarations.cpp b/clang/test/CodeGenCXX/float16-declarations.cpp
index b395beb263e154..9381dedd4f00da 100644
--- a/clang/test/CodeGenCXX/float16-declarations.cpp
+++ b/clang/test/CodeGenCXX/float16-declarations.cpp
@@ -7,16 +7,16 @@
namespace {
_Float16 f1n;
-// CHECK-DAG: @_ZN12_GLOBAL__N_13f1nE = internal global half 0xH0000, align 2
+// CHECK-DAG: @_ZN12_GLOBAL__N_13f1nE = internal global half f0x0000, align 2
_Float16 f2n = 33.f16;
-// CHECK-DAG: @_ZN12_GLOBAL__N_13f2nE = internal global half 0xH5020, align 2
+// CHECK-DAG: @_ZN12_GLOBAL__N_13f2nE = internal global half f0x5020, align 2
_Float16 arr1n[10];
// CHECK-AARCH64-DAG: @_ZN12_GLOBAL__N_15arr1nE = internal global [10 x half] zeroinitializer, align 2
_Float16 arr2n[] = { 1.2, 3.0, 3.e4 };
-// CHECK-DAG: @_ZN12_GLOBAL__N_15arr2nE = internal global [3 x half] [half 0xH3CCD, half 0xH4200, half 0xH7753], align 2
+// CHECK-DAG: @_ZN12_GLOBAL__N_15arr2nE = internal global [3 x half] [half f0x3CCD, half f0x4200, half f0x7753], align 2
const volatile _Float16 func1n(const _Float16 &arg) {
return arg + f2n + arr1n[4] - arr2n[1];
@@ -27,16 +27,16 @@ namespace {
/* File */
_Float16 f1f;
-// CHECK-AARCH64-DAG: @f1f = dso_local global half 0xH0000, align 2
+// CHECK-AARCH64-DAG: @f1f = dso_local global half f0x0000, align 2
_Float16 f2f = 32.4;
-// CHECK-DAG: @f2f = dso_local global half 0xH500D, align 2
+// CHECK-DAG: @f2f = dso_local global half f0x500D, align 2
_Float16 arr1f[10];
// CHECK-AARCH64-DAG: @arr1f = dso_local global [10 x half] zeroinitializer, align 2
_Float16 arr2f[] = { -1.2, -3.0, -3.e4 };
-// CHECK-DAG: @arr2f = dso_local global [3 x half] [half 0xHBCCD, half 0xHC200, half 0xHF753], align 2
+// CHECK-DAG: @arr2f = dso_local global [3 x half] [half f0xBCCD, half f0xC200, half f0xF753], align 2
_Float16 func1f(_Float16 arg);
@@ -89,36 +89,36 @@ extern int printf (const char *__restrict __format, ...);
int main(void) {
_Float16 f1l = 1e3f16;
-// CHECK-DAG: store half 0xH63D0, ptr %{{.*}}, align 2
+// CHECK-DAG: store half f0x63D0, ptr %{{.*}}, align 2
_Float16 f2l = -0.f16;
-// CHECK-DAG: store half 0xH8000, ptr %{{.*}}, align 2
+// CHECK-DAG: store half f0x8000, ptr %{{.*}}, align 2
_Float16 f3l = 1.000976562;
-// CHECK-DAG: store half 0xH3C01, ptr %{{.*}}, align 2
+// CHECK-DAG: store half f0x3C01, ptr %{{.*}}, align 2
C1 c1(f1l);
// CHECK-DAG: [[F1L:%[a-z0-9]+]] = load half, ptr %{{.*}}, align 2
// CHECK-DAG: call void @_ZN2C1C2EDF16_(ptr {{[^,]*}} %{{.*}}, half noundef %{{.*}})
S1<_Float16> s1 = { 132.f16 };
-// CHECK-DAG: @__const.main.s1 = private unnamed_addr constant %struct.S1 { half 0xH5820 }, align 2
+// CHECK-DAG: @__const.main.s1 = private unnamed_addr constant %struct.S1 { half f0x5820 }, align 2
// CHECK-DAG: call void @llvm.memcpy.p0.p0.i64(ptr align 2 %{{.*}}, ptr align 2 @__const.main.s1, i64 2, i1 false)
_Float16 f4l = func1n(f1l) + func1f(f2l) + c1.func1c(f3l) + c1.func2c(f1l) +
func1t(f1l) + s1.mem2 - f1n + f2n;
auto f5l = -1.f16, *f6l = &f2l, f7l = func1t(f3l);
-// CHECK-DAG: store half 0xHBC00, ptr %{{.*}}, align 2
+// CHECK-DAG: store half f0xBC00, ptr %{{.*}}, align 2
// CHECK-DAG: store ptr %{{.*}}, ptr %{{.*}}, align 8
_Float16 f8l = f4l++;
// CHECK-DAG: %{{.*}} = load half, ptr %{{.*}}, align 2
-// CHECK-DAG: [[INC:%[a-z0-9]+]] = fadd half {{.*}}, 0xH3C00
+// CHECK-DAG: [[INC:%[a-z0-9]+]] = fadd half {{.*}}, f0x3C00
// CHECK-DAG: store half [[INC]], ptr %{{.*}}, align 2
_Float16 arr1l[] = { -1.f16, -0.f16, -11.f16 };
-// CHECK-DAG: @__const.main.arr1l = private unnamed_addr constant [3 x half] [half 0xHBC00, half 0xH8000, half 0xHC980], align 2
+// CHECK-DAG: @__const.main.arr1l = private unnamed_addr constant [3 x half] [half f0xBC00, half f0x8000, half f0xC980], align 2
float cvtf = f2n;
//CHECK-DAG: [[H2F:%[a-z0-9]+]] = fpext half {{%[0-9]+}} to float
@@ -134,9 +134,9 @@ int main(void) {
//CHECK-AARCh64-DAG: store fp128 [[H2LD]], ptr %{{.*}}, align 16
_Float16 f2h = 42.0f;
-//CHECK-DAG: store half 0xH5140, ptr %{{.*}}, align 2
+//CHECK-DAG: store half f0x5140, ptr %{{.*}}, align 2
_Float16 d2h = 42.0;
-//CHECK-DAG: store half 0xH5140, ptr %{{.*}}, align 2
+//CHECK-DAG: store half f0x5140, ptr %{{.*}}, align 2
_Float16 ld2h = 42.0l;
-//CHECK-DAG:store half 0xH5140, ptr %{{.*}}, align 2
+//CHECK-DAG:store half f0x5140, ptr %{{.*}}, align 2
}
diff --git a/clang/test/CodeGenCXX/ibm128-declarations.cpp b/clang/test/CodeGenCXX/ibm128-declarations.cpp
index 61ff6fff2d0a74..d5e3a5eeeaa4b6 100644
--- a/clang/test/CodeGenCXX/ibm128-declarations.cpp
+++ b/clang/test/CodeGenCXX/ibm128-declarations.cpp
@@ -76,7 +76,7 @@ int main(void) {
// CHECK: %struct.T1 = type { ppc_fp128 }
// CHECK: @arrgf = global [10 x ppc_fp128] zeroinitializer, align 16
-// CHECK: @gf = global ppc_fp128 0xM40080000000000000000000000000000, align 16
+// CHECK: @gf = global ppc_fp128 f0x00000000000000004008000000000000, align 16
// CHECK: @_ZN5CTest3scfE = external constant ppc_fp128, align 16
// CHECK: define dso_local noundef ppc_fp128 @_Z10func_arithggg(ppc_fp128 noundef %a, ppc_fp128 noundef %b, ppc_fp128 noundef %c)
diff --git a/clang/test/CodeGenHLSL/builtins/rcp.hlsl b/clang/test/CodeGenHLSL/builtins/rcp.hlsl
index 83fe33406c7c89..5255c50055f08e 100644
--- a/clang/test/CodeGenHLSL/builtins/rcp.hlsl
+++ b/clang/test/CodeGenHLSL/builtins/rcp.hlsl
@@ -15,7 +15,7 @@
// DXIL_NATIVE_HALF: define noundef half @
// SPIR_NATIVE_HALF: define spir_func noundef half @
-// NATIVE_HALF: %hlsl.rcp = fdiv half 0xH3C00, %{{.*}}
+// NATIVE_HALF: %hlsl.rcp = fdiv half f0x3C00, %{{.*}}
// NATIVE_HALF: ret half %hlsl.rcp
// DXIL_NO_HALF: define noundef float @
// SPIR_NO_HALF: define spir_func noundef float @
@@ -25,7 +25,7 @@ half test_rcp_half(half p0) { return rcp(p0); }
// DXIL_NATIVE_HALF: define noundef <2 x half> @
// SPIR_NATIVE_HALF: define spir_func noundef <2 x half> @
-// NATIVE_HALF: %hlsl.rcp = fdiv <2 x half> splat (half 0xH3C00), %{{.*}}
+// NATIVE_HALF: %hlsl.rcp = fdiv <2 x half> splat (half f0x3C00), %{{.*}}
// NATIVE_HALF: ret <2 x half> %hlsl.rcp
// DXIL_NO_HALF: define noundef <2 x float> @
// SPIR_NO_HALF: define spir_func noundef <2 x float> @
@@ -35,7 +35,7 @@ half2 test_rcp_half2(half2 p0) { return rcp(p0); }
// DXIL_NATIVE_HALF: define noundef <3 x half> @
// SPIR_NATIVE_HALF: define spir_func noundef <3 x half> @
-// NATIVE_HALF: %hlsl.rcp = fdiv <3 x half> splat (half 0xH3C00), %{{.*}}
+// NATIVE_HALF: %hlsl.rcp = fdiv <3 x half> splat (half f0x3C00), %{{.*}}
// NATIVE_HALF: ret <3 x half> %hlsl.rcp
// DXIL_NO_HALF: define noundef <3 x float> @
// SPIR_NO_HALF: define spir_func noundef <3 x float> @
@@ -45,7 +45,7 @@ half3 test_rcp_half3(half3 p0) { return rcp(p0); }
// DXIL_NATIVE_HALF: define noundef <4 x half> @
// SPIR_NATIVE_HALF: define spir_func noundef <4 x half> @
-// NATIVE_HALF: %hlsl.rcp = fdiv <4 x half> splat (half 0xH3C00), %{{.*}}
+// NATIVE_HALF: %hlsl.rcp = fdiv <4 x half> splat (half f0x3C00), %{{.*}}
// NATIVE_HALF: ret <4 x half> %hlsl.rcp
// DXIL_NO_HALF: define noundef <4 x float> @
// SPIR_NO_HALF: define spir_func noundef <4 x float> @
diff --git a/clang/test/CodeGenOpenCL/amdgpu-alignment.cl b/clang/test/CodeGenOpenCL/amdgpu-alignment.cl
index 8f57713fe1f041..8a73dfaef302f1 100644
--- a/clang/test/CodeGenOpenCL/amdgpu-alignment.cl
+++ b/clang/test/CodeGenOpenCL/amdgpu-alignment.cl
@@ -116,9 +116,9 @@ typedef double __attribute__((ext_vector_type(16))) double16;
// CHECK: store volatile <4 x i64> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v4i64, align 32
// CHECK: store volatile <8 x i64> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v8i64, align 64
// CHECK: store volatile <16 x i64> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v16i64, align 128
-// CHECK: store volatile half 0xH0000, ptr addrspace(3) @local_memory_alignment_global.lds_f16, align 2
+// CHECK: store volatile half f0x0000, ptr addrspace(3) @local_memory_alignment_global.lds_f16, align 2
// CHECK: store volatile <2 x half> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v2f16, align 4
-// CHECK: store volatile <4 x half> <half 0xH0000, half 0xH0000, half 0xH0000, half undef>, ptr addrspace(3) @local_memory_alignment_global.lds_v3f16, align 8
+// CHECK: store volatile <4 x half> <half f0x0000, half f0x0000, half f0x0000, half undef>, ptr addrspace(3) @local_memory_alignment_global.lds_v3f16, align 8
// CHECK: store volatile <4 x half> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v4f16, align 8
// CHECK: store volatile <8 x half> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v8f16, align 16
// CHECK: store volatile <16 x half> zeroinitializer, ptr addrspace(3) @local_memory_alignment_global.lds_v16f16, align 32
@@ -403,9 +403,9 @@ kernel void local_memory_alignment_arg(
// CHECK: store volatile <4 x i64> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 32
// CHECK: store volatile <8 x i64> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 64
// CHECK: store volatile <16 x i64> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 128
-// CHECK: store volatile half 0xH0000, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 2
+// CHECK: store volatile half f0x0000, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 2
// CHECK: store volatile <2 x half> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 4
-// CHECK: store volatile <4 x half> <half 0xH0000, half 0xH0000, half 0xH0000, half undef>, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 8
+// CHECK: store volatile <4 x half> <half f0x0000, half f0x0000, half f0x0000, half undef>, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 8
// CHECK: store volatile <4 x half> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 8
// CHECK: store volatile <8 x half> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 16
// CHECK: store volatile <16 x half> zeroinitializer, ptr addrspace(5) %arraydecay{{[0-9]+}}, align 32
diff --git a/clang/test/CodeGenOpenCL/half.cl b/clang/test/CodeGenOpenCL/half.cl
index 6ade7e691aa93a..cb831f214ef009 100644
--- a/clang/test/CodeGenOpenCL/half.cl
+++ b/clang/test/CodeGenOpenCL/half.cl
@@ -12,11 +12,11 @@ half test()
half y = x + x;
half z = y * 1.0f;
return z;
-// CHECK: half 0xH3260
+// CHECK: half f0x3260
}
// CHECK-LABEL: @test_inc(half noundef %x)
-// CHECK: [[INC:%.*]] = fadd half %x, 0xH3C00
+// CHECK: [[INC:%.*]] = fadd half %x, f0x3C00
// CHECK: ret half [[INC]]
half test_inc(half x)
{
@@ -30,12 +30,12 @@ __attribute__((overloadable)) float min(float, float);
__kernel void foo( __global half* buf, __global float* buf2 )
{
buf[0] = min( buf[0], 1.5h );
-// CHECK: half noundef 0xH3E00
+// CHECK: half noundef f0x3E00
buf[0] = min( buf2[0], 1.5f );
// CHECK: float noundef 1.500000e+00
const half one = 1.6666;
buf[1] = min( buf[1], one );
-// CHECK: half noundef 0xH3EAB
+// CHECK: half noundef f0x3EAB
}
diff --git a/clang/test/Frontend/fixed_point_conversions_half.c b/clang/test/Frontend/fixed_point_conversions_half.c
index 38b99123b867f1..116ec5ff1d2e31 100644
--- a/clang/test/Frontend/fixed_point_conversions_half.c
+++ b/clang/test/Frontend/fixed_point_conversions_half.c
@@ -26,7 +26,7 @@ _Float16 h;
// CHECK-LABEL: @half_fix1(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load half, ptr @h, align 2
-// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], 0xH5800
+// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], f0x5800
// CHECK-NEXT: [[TMP2:%.*]] = fptosi half [[TMP1]] to i8
// CHECK-NEXT: store i8 [[TMP2]], ptr @sf, align 1
// CHECK-NEXT: ret void
@@ -51,7 +51,7 @@ void half_fix2(void) {
// CHECK-LABEL: @half_fix3(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load half, ptr @h, align 2
-// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], 0xH5800
+// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], f0x5800
// CHECK-NEXT: [[TMP2:%.*]] = fptosi half [[TMP1]] to i16
// CHECK-NEXT: store i16 [[TMP2]], ptr @sa, align 2
// CHECK-NEXT: ret void
@@ -85,7 +85,7 @@ void half_fix4(void) {
// UNSIGNED-LABEL: @half_fix5(
// UNSIGNED-NEXT: entry:
// UNSIGNED-NEXT: [[TMP0:%.*]] = load half, ptr @h, align 2
-// UNSIGNED-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], 0xH5800
+// UNSIGNED-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], f0x5800
// UNSIGNED-NEXT: [[TMP2:%.*]] = fptosi half [[TMP1]] to i16
// UNSIGNED-NEXT: store i16 [[TMP2]], ptr @usa, align 2
// UNSIGNED-NEXT: ret void
@@ -120,7 +120,7 @@ void half_fix6(void) {
// CHECK-LABEL: @half_sat1(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load half, ptr @h, align 2
-// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], 0xH5800
+// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], f0x5800
// CHECK-NEXT: [[TMP2:%.*]] = call i8 @llvm.fptosi.sat.i8.f16(half [[TMP1]])
// CHECK-NEXT: store i8 [[TMP2]], ptr @sf_sat, align 1
// CHECK-NEXT: ret void
@@ -145,7 +145,7 @@ void half_sat2(void) {
// CHECK-LABEL: @half_sat3(
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load half, ptr @h, align 2
-// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], 0xH5800
+// CHECK-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], f0x5800
// CHECK-NEXT: [[TMP2:%.*]] = call i16 @llvm.fptosi.sat.i16.f16(half [[TMP1]])
// CHECK-NEXT: store i16 [[TMP2]], ptr @sa_sat, align 2
// CHECK-NEXT: ret void
@@ -179,7 +179,7 @@ void half_sat4(void) {
// UNSIGNED-LABEL: @half_sat5(
// UNSIGNED-NEXT: entry:
// UNSIGNED-NEXT: [[TMP0:%.*]] = load half, ptr @h, align 2
-// UNSIGNED-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], 0xH5800
+// UNSIGNED-NEXT: [[TMP1:%.*]] = fmul half [[TMP0]], f0x5800
// UNSIGNED-NEXT: [[TMP2:%.*]] = call i16 @llvm.fptosi.sat.i16.f16(half [[TMP1]])
// UNSIGNED-NEXT: [[TMP3:%.*]] = icmp slt i16 [[TMP2]], 0
// UNSIGNED-NEXT: [[SATMIN:%.*]] = select i1 [[TMP3]], i16 0, i16 [[TMP2]]
@@ -219,7 +219,7 @@ void half_sat6(void) {
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load i8, ptr @sf, align 1
// CHECK-NEXT: [[TMP1:%.*]] = sitofp i8 [[TMP0]] to half
-// CHECK-NEXT: [[TMP2:%.*]] = fmul half [[TMP1]], 0xH2000
+// CHECK-NEXT: [[TMP2:%.*]] = fmul half [[TMP1]], f0x2000
// CHECK-NEXT: store half [[TMP2]], ptr @h, align 2
// CHECK-NEXT: ret void
//
@@ -244,7 +244,7 @@ void fix_half2(void) {
// CHECK-NEXT: entry:
// CHECK-NEXT: [[TMP0:%.*]] = load i16, ptr @sa, align 2
// CHECK-NEXT: [[TMP1:%.*]] = sitofp i16 [[TMP0]] to half
-// CHECK-NEXT: [[TMP2:%.*]] = fmul half [[TMP1]], 0xH2000
+// CHECK-NEXT: [[TMP2:%.*]] = fmul half [[TMP1]], f0x2000
// CHECK-NEXT: store half [[TMP2]], ptr @h, align 2
// CHECK-NEXT: ret void
//
@@ -278,7 +278,7 @@ void fix_half4(void) {
// UNSIGNED-NEXT: entry:
// UNSIGNED-NEXT: [[TMP0:%.*]] = load i16, ptr @usa, align 2
// UNSIGNED-NEXT: [[TMP1:%.*]] = uitofp i16 [[TMP0]] to half
-// UNSIGNED-NEXT: [[TMP2:%.*]] = fmul half [[TMP1]], 0xH2000
+// UNSIGNED-NEXT: [[TMP2:%.*]] = fmul half [[TMP1]], f0x2000
// UNSIGNED-NEXT: store half [[TMP2]], ptr @h, align 2
// UNSIGNED-NEXT: ret void
//
diff --git a/clang/test/Headers/__clang_hip_math_deprecated.hip b/clang/test/Headers/__clang_hip_math_deprecated.hip
index caba3e9ad83d18..5beb20b148b205 100644
--- a/clang/test/Headers/__clang_hip_math_deprecated.hip
+++ b/clang/test/Headers/__clang_hip_math_deprecated.hip
@@ -12,7 +12,7 @@
// CHECK-LABEL: @test_rcpf16_wrapper(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[DIV_I:%.*]] = fdiv contract half 0xH3C00, [[X:%.*]]
+// CHECK-NEXT: [[DIV_I:%.*]] = fdiv contract half f0x3C00, [[X:%.*]]
// CHECK-NEXT: ret half [[DIV_I]]
//
extern "C" __device__ _Float16 test_rcpf16_wrapper(_Float16 x) {
@@ -21,7 +21,7 @@ extern "C" __device__ _Float16 test_rcpf16_wrapper(_Float16 x) {
// CHECK-LABEL: @test_rcp2f16_wrapper(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[DIV_I:%.*]] = fdiv contract <2 x half> splat (half 0xH3C00), [[X:%.*]]
+// CHECK-NEXT: [[DIV_I:%.*]] = fdiv contract <2 x half> splat (half f0x3C00), [[X:%.*]]
// CHECK-NEXT: ret <2 x half> [[DIV_I]]
//
extern "C" __device__ __2f16 test_rcp2f16_wrapper(__2f16 x) {
diff --git a/clang/test/OpenMP/atomic_capture_codegen.cpp b/clang/test/OpenMP/atomic_capture_codegen.cpp
index 7535aed26f7d56..7033c6a205c943 100644
--- a/clang/test/OpenMP/atomic_capture_codegen.cpp
+++ b/clang/test/OpenMP/atomic_capture_codegen.cpp
@@ -553,7 +553,7 @@ int main(void) {
// CHECK: [[CONV:%.+]] = zext i1 [[BOOL_EXPECTED]] to i32
// CHECK: [[X_RVAL:%.+]] = sitofp i32 [[CONV]] to x86_fp80
// CHECK: [[MUL:%.+]] = fmul x86_fp80 [[EXPR]], [[X_RVAL]]
-// CHECK: [[BOOL_DESIRED:%.+]] = fcmp une x86_fp80 [[MUL]], 0xK00000000000000000000
+// CHECK: [[BOOL_DESIRED:%.+]] = fcmp une x86_fp80 [[MUL]], f0x00000000000000000000
// CHECK: [[DESIRED:%.+]] = zext i1 [[BOOL_DESIRED]] to i8
// CHECK: store i8 [[DESIRED]], ptr [[TEMP:%.+]],
// CHECK: [[DESIRED:%.+]] = load i8, ptr [[TEMP]],
diff --git a/clang/test/OpenMP/atomic_update_codegen.cpp b/clang/test/OpenMP/atomic_update_codegen.cpp
index d91dc79818c281..ba3e3cb4052e48 100644
--- a/clang/test/OpenMP/atomic_update_codegen.cpp
+++ b/clang/test/OpenMP/atomic_update_codegen.cpp
@@ -491,7 +491,7 @@ int main(void) {
// CHECK: [[CONV:%.+]] = zext i1 [[BOOL_EXPECTED]] to i32
// CHECK: [[X_RVAL:%.+]] = sitofp i32 [[CONV]] to x86_fp80
// CHECK: [[MUL:%.+]] = fmul x86_fp80 [[EXPR]], [[X_RVAL]]
-// CHECK: [[BOOL_DESIRED:%.+]] = fcmp une x86_fp80 [[MUL]], 0xK00000000000000000000
+// CHECK: [[BOOL_DESIRED:%.+]] = fcmp une x86_fp80 [[MUL]], f0x00000000000000000000
// CHECK: [[DESIRED:%.+]] = zext i1 [[BOOL_DESIRED]] to i8
// CHECK: store i8 [[DESIRED]], ptr [[TEMP:%.+]]
// CHECK: [[DESIRED:%.+]] = load i8, ptr [[TEMP]]
diff --git a/llvm/lib/IR/AsmWriter.cpp b/llvm/lib/IR/AsmWriter.cpp
index a37a8901489cf7..1803c37b032839 100644
--- a/llvm/lib/IR/AsmWriter.cpp
+++ b/llvm/lib/IR/AsmWriter.cpp
@@ -1501,32 +1501,27 @@ static void WriteAPFloatInternal(raw_ostream &Out, const APFloat &APF) {
// Either half, bfloat or some form of long double.
- // These appear as a magic letter identifying the type, then a
- // fixed number of hex digits.
+ // These are printed as the 'f0x' prefix followed by a fixed number
+ // of hex digits encoding the exact bit pattern.
- Out << "0x";
+ Out << "f0x";
APInt API = APF.bitcastToAPInt();
if (&APF.getSemantics() == &APFloat::x87DoubleExtended()) {
- Out << 'K';
Out << format_hex_no_prefix(API.getHiBits(16).getZExtValue(), 4,
/*Upper=*/true);
Out << format_hex_no_prefix(API.getLoBits(64).getZExtValue(), 16,
/*Upper=*/true);
} else if (&APF.getSemantics() == &APFloat::IEEEquad()) {
- Out << 'L';
- Out << format_hex_no_prefix(API.getLoBits(64).getZExtValue(), 16,
- /*Upper=*/true);
Out << format_hex_no_prefix(API.getHiBits(64).getZExtValue(), 16,
/*Upper=*/true);
- } else if (&APF.getSemantics() == &APFloat::PPCDoubleDouble()) {
- Out << 'M';
Out << format_hex_no_prefix(API.getLoBits(64).getZExtValue(), 16,
/*Upper=*/true);
+ } else if (&APF.getSemantics() == &APFloat::PPCDoubleDouble()) {
Out << format_hex_no_prefix(API.getHiBits(64).getZExtValue(), 16,
/*Upper=*/true);
+ Out << format_hex_no_prefix(API.getLoBits(64).getZExtValue(), 16,
+ /*Upper=*/true);
} else if (&APF.getSemantics() == &APFloat::IEEEhalf()) {
- Out << 'H';
Out << format_hex_no_prefix(API.getZExtValue(), 4,
/*Upper=*/true);
} else if (&APF.getSemantics() == &APFloat::BFloat()) {
- Out << 'R';
Out << format_hex_no_prefix(API.getZExtValue(), 4,
/*Upper=*/true);
} else
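For reference, a minimal before/after sketch of the literal spellings this
hunk changes. The bit patterns are taken verbatim from the test updates in
this patch; the global names (@h, @bf, @x, @pp) are placeholders for
illustration only:

  ; legacy spelling                                 new spelling
  @h  = global half      0xH3C00                    ; half      f0x3C00                  (1.0)
  @bf = global bfloat    0xR8000                    ; bfloat    f0x8000                  (-0.0)
  @x  = global x86_fp80  0xK00000000000000000000    ; x86_fp80  f0x00000000000000000000  (0.0)
  @pp = global ppc_fp128 0xM40080000000000000000000000000000
                                                    ; ppc_fp128 f0x00000000000000004008000000000000 (3.0)

Note that for the 128-bit types the two 64-bit halves swap order relative to
the legacy 0xL/0xM forms, per the getHiBits/getLoBits reordering in the hunk
above and the ibm128-declarations.cpp test update earlier in this patch.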
diff --git a/llvm/test/Analysis/CostModel/AArch64/arith-fp.ll b/llvm/test/Analysis/CostModel/AArch64/arith-fp.ll
index b329a5607acb97..9f4e41e684231f 100644
--- a/llvm/test/Analysis/CostModel/AArch64/arith-fp.ll
+++ b/llvm/test/Analysis/CostModel/AArch64/arith-fp.ll
@@ -69,9 +69,9 @@ define i32 @fsub(i32 %arg) {
define i32 @fneg_idiom(i32 %arg) {
; CHECK-LABEL: 'fneg_idiom'
-; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %F16 = fsub half 0xH8000, undef
-; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V4F16 = fsub <4 x half> splat (half 0xH8000), undef
-; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V8F16 = fsub <8 x half> splat (half 0xH8000), undef
+; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %F16 = fsub half f0x8000, undef
+; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V4F16 = fsub <4 x half> splat (half f0x8000), undef
+; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V8F16 = fsub <8 x half> splat (half f0x8000), undef
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %F32 = fsub float -0.000000e+00, undef
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V2F32 = fsub <2 x float> splat (float -0.000000e+00), undef
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V4F32 = fsub <4 x float> splat (float -0.000000e+00), undef
diff --git a/llvm/test/Analysis/CostModel/AArch64/insert-extract.ll b/llvm/test/Analysis/CostModel/AArch64/insert-extract.ll
index babc1c42c74385..22d47663edca51 100644
--- a/llvm/test/Analysis/CostModel/AArch64/insert-extract.ll
+++ b/llvm/test/Analysis/CostModel/AArch64/insert-extract.ll
@@ -37,8 +37,8 @@ define void @vectorInstrCost() {
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t80 = insertelement <2 x i32> undef, i32 5, i32 1
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t90 = insertelement <2 x i64> undef, i64 6, i32 0
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t100 = insertelement <2 x i64> undef, i64 7, i32 1
-; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t110 = insertelement <4 x half> zeroinitializer, half 0xH0000, i64 0
-; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t120 = insertelement <4 x half> zeroinitializer, half 0xH0000, i64 1
+; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t110 = insertelement <4 x half> zeroinitializer, half f0x0000, i64 0
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t120 = insertelement <4 x half> zeroinitializer, half f0x0000, i64 1
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t130 = insertelement <2 x float> zeroinitializer, float 0.000000e+00, i64 0
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t140 = insertelement <2 x float> zeroinitializer, float 0.000000e+00, i64 1
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t150 = insertelement <2 x double> zeroinitializer, double 0.000000e+00, i64 0
@@ -72,8 +72,8 @@ define void @vectorInstrCost() {
; KRYO-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t80 = insertelement <2 x i32> undef, i32 5, i32 1
; KRYO-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t90 = insertelement <2 x i64> undef, i64 6, i32 0
; KRYO-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t100 = insertelement <2 x i64> undef, i64 7, i32 1
-; KRYO-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t110 = insertelement <4 x half> zeroinitializer, half 0xH0000, i64 0
-; KRYO-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t120 = insertelement <4 x half> zeroinitializer, half 0xH0000, i64 1
+; KRYO-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t110 = insertelement <4 x half> zeroinitializer, half f0x0000, i64 0
+; KRYO-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t120 = insertelement <4 x half> zeroinitializer, half f0x0000, i64 1
; KRYO-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t130 = insertelement <2 x float> zeroinitializer, float 0.000000e+00, i64 0
; KRYO-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %t140 = insertelement <2 x float> zeroinitializer, float 0.000000e+00, i64 1
; KRYO-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %t150 = insertelement <2 x double> zeroinitializer, double 0.000000e+00, i64 0
diff --git a/llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll b/llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll
index a95542f6901733..0eb02a5df4a2ac 100644
--- a/llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll
+++ b/llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll
@@ -7,44 +7,44 @@ target datalayout = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128"
define void @strict_fp_reductions() {
; CHECK-LABEL: 'strict_fp_reductions'
-; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 76 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 76 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f32 = call float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f32 = call float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %fadd_v8f32 = call float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f64 = call double @llvm.vector.reduce.fadd.v2f64(double 0.000000e+00, <2 x double> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v4f64 = call double @llvm.vector.reduce.fadd.v4f64(double 0.000000e+00, <4 x double> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f8 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f8 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fadd_v4f128 = call fp128 @llvm.vector.reduce.fadd.v4f128(fp128 undef, <4 x fp128> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; FP16-LABEL: 'strict_fp_reductions'
-; FP16-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 60 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 60 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f32 = call float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f32 = call float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %fadd_v8f32 = call float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f64 = call double @llvm.vector.reduce.fadd.v2f64(double 0.000000e+00, <2 x double> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v4f64 = call double @llvm.vector.reduce.fadd.v4f64(double 0.000000e+00, <4 x double> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f8 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f8 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fadd_v4f128 = call fp128 @llvm.vector.reduce.fadd.v4f128(fp128 undef, <4 x fp128> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; BF16-LABEL: 'strict_fp_reductions'
-; BF16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 76 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 76 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f32 = call float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f32 = call float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %fadd_v8f32 = call float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v2f64 = call double @llvm.vector.reduce.fadd.v2f64(double 0.000000e+00, <2 x double> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v4f64 = call double @llvm.vector.reduce.fadd.v4f64(double 0.000000e+00, <4 x double> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f8 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %fadd_v4f8 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fadd_v4f128 = call fp128 @llvm.vector.reduce.fadd.v4f128(fp128 undef, <4 x fp128> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
@@ -66,16 +66,16 @@ define void @strict_fp_reductions() {
define void @fast_fp_reductions() {
; CHECK-LABEL: 'fast_fp_reductions'
-; CHECK-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_fast = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_fast = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16 = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16 = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v11f16 = call fast half @llvm.vector.reduce.fadd.v11f16(half 0xH0000, <11 x half> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v13f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v13f16(half 0xH0000, <13 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_fast = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_fast = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16 = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16 = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v11f16 = call fast half @llvm.vector.reduce.fadd.v11f16(half f0x0000, <11 x half> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v13f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v13f16(half f0x0000, <13 x half> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %fadd_v2f32 = call fast float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %fadd_v2f32_reassoc = call reassoc float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f32 = call fast float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
@@ -90,21 +90,21 @@ define void @fast_fp_reductions() {
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f64_reassoc = call reassoc double @llvm.vector.reduce.fadd.v4f64(double 0.000000e+00, <4 x double> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v7f64 = call fast double @llvm.vector.reduce.fadd.v7f64(double 0.000000e+00, <7 x double> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v9f64_reassoc = call reassoc double @llvm.vector.reduce.fadd.v9f64(double 0.000000e+00, <9 x double> undef)
-; CHECK-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f8 = call reassoc bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR8000, <4 x bfloat> undef)
+; CHECK-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f8 = call reassoc bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x8000, <4 x bfloat> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v4f128 = call reassoc fp128 @llvm.vector.reduce.fadd.v4f128(fp128 undef, <4 x fp128> undef)
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; FP16-LABEL: 'fast_fp_reductions'
-; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v2f16_fast = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v2f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f16_fast = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v8f16 = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v8f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v16f16 = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v16f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v11f16 = call fast half @llvm.vector.reduce.fadd.v11f16(half 0xH0000, <11 x half> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v13f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v13f16(half 0xH0000, <13 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v2f16_fast = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v2f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f16_fast = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v8f16 = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v8f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v16f16 = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v16f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v11f16 = call fast half @llvm.vector.reduce.fadd.v11f16(half f0x0000, <11 x half> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v13f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v13f16(half f0x0000, <13 x half> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %fadd_v2f32 = call fast float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %fadd_v2f32_reassoc = call reassoc float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f32 = call fast float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
@@ -119,21 +119,21 @@ define void @fast_fp_reductions() {
; FP16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f64_reassoc = call reassoc double @llvm.vector.reduce.fadd.v4f64(double 0.000000e+00, <4 x double> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v7f64 = call fast double @llvm.vector.reduce.fadd.v7f64(double 0.000000e+00, <7 x double> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v9f64_reassoc = call reassoc double @llvm.vector.reduce.fadd.v9f64(double 0.000000e+00, <9 x double> undef)
-; FP16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f8 = call reassoc bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR8000, <4 x bfloat> undef)
+; FP16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f8 = call reassoc bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x8000, <4 x bfloat> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v4f128 = call reassoc fp128 @llvm.vector.reduce.fadd.v4f128(fp128 undef, <4 x fp128> undef)
; FP16-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; BF16-LABEL: 'fast_fp_reductions'
-; BF16-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_fast = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_fast = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16 = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16 = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v11f16 = call fast half @llvm.vector.reduce.fadd.v11f16(half 0xH0000, <11 x half> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v13f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v13f16(half 0xH0000, <13 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_fast = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %fadd_v2f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_fast = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v4f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16 = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 30 for instruction: %fadd_v8f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16 = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %fadd_v16f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v11f16 = call fast half @llvm.vector.reduce.fadd.v11f16(half f0x0000, <11 x half> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 38 for instruction: %fadd_v13f16_reassoc = call reassoc half @llvm.vector.reduce.fadd.v13f16(half f0x0000, <13 x half> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %fadd_v2f32 = call fast float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %fadd_v2f32_reassoc = call reassoc float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f32 = call fast float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
@@ -148,7 +148,7 @@ define void @fast_fp_reductions() {
; BF16-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v4f64_reassoc = call reassoc double @llvm.vector.reduce.fadd.v4f64(double 0.000000e+00, <4 x double> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v7f64 = call fast double @llvm.vector.reduce.fadd.v7f64(double 0.000000e+00, <7 x double> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v9f64_reassoc = call reassoc double @llvm.vector.reduce.fadd.v9f64(double 0.000000e+00, <9 x double> undef)
-; BF16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v4f8 = call reassoc bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR8000, <4 x bfloat> undef)
+; BF16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v4f8 = call reassoc bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x8000, <4 x bfloat> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v4f128 = call reassoc fp128 @llvm.vector.reduce.fadd.v4f128(fp128 undef, <4 x fp128> undef)
; BF16-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
diff --git a/llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll b/llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll
index ab9d7f9dc859de..a83cc8e277fe96 100644
--- a/llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll
+++ b/llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll
@@ -313,11 +313,11 @@ define amdgpu_kernel void @fdiv_f16_f32ftzdaz() #1 {
define amdgpu_kernel void @rcp_ieee() #0 {
; CIFASTF64-LABEL: 'rcp_ieee'
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half 0xH3C00, undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half f0x3C00, undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f32 = fdiv float 1.000000e+00, undef
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 42 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -331,11 +331,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; CISLOWF64-LABEL: 'rcp_ieee'
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half 0xH3C00, undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half f0x3C00, undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f32 = fdiv float 1.000000e+00, undef
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 42 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -349,11 +349,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; SIFASTF64-LABEL: 'rcp_ieee'
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half 0xH3C00, undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half f0x3C00, undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f32 = fdiv float 1.000000e+00, undef
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 42 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -367,11 +367,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; SISLOWF64-LABEL: 'rcp_ieee'
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half 0xH3C00, undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f16 = fdiv half f0x3C00, undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 56 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 112 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f32 = fdiv float 1.000000e+00, undef
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 42 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -385,11 +385,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; FP16-LABEL: 'rcp_ieee'
-; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half 0xH3C00, undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half f0x3C00, undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; FP16-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %f32 = fdiv float 1.000000e+00, undef
; FP16-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; FP16-NEXT: Cost Model: Found an estimated cost of 42 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -403,11 +403,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; FP16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; CI-SIZE-LABEL: 'rcp_ieee'
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f16 = fdiv half 0xH3C00, undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 96 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f16 = fdiv half f0x3C00, undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 96 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f32 = fdiv float 1.000000e+00, undef
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 36 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -421,11 +421,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; SI-SIZE-LABEL: 'rcp_ieee'
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f16 = fdiv half 0xH3C00, undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 96 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f16 = fdiv half f0x3C00, undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 96 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f32 = fdiv float 1.000000e+00, undef
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 36 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -439,11 +439,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; FP16-SIZE-LABEL: 'rcp_ieee'
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half 0xH3C00, undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half f0x3C00, undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %f32 = fdiv float 1.000000e+00, undef
; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 36 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -476,11 +476,11 @@ define amdgpu_kernel void @rcp_ieee() #0 {
 
define amdgpu_kernel void @rcp_ftzdaz() #1 {
; CIFASTF64-LABEL: 'rcp_ftzdaz'
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half 0xH3C00, undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half f0x3C00, undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f32 = fdiv float 1.000000e+00, undef
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -494,11 +494,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; CIFASTF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; CISLOWF64-LABEL: 'rcp_ftzdaz'
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half 0xH3C00, undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half f0x3C00, undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f32 = fdiv float 1.000000e+00, undef
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -512,11 +512,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; CISLOWF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; SIFASTF64-LABEL: 'rcp_ftzdaz'
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half 0xH3C00, undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half f0x3C00, undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f32 = fdiv float 1.000000e+00, undef
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -530,11 +530,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; SIFASTF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; SISLOWF64-LABEL: 'rcp_ftzdaz'
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half 0xH3C00, undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half f0x3C00, undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f32 = fdiv float 1.000000e+00, undef
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -548,11 +548,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; SISLOWF64-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; FP16-LABEL: 'rcp_ftzdaz'
-; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half 0xH3C00, undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; FP16-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f16 = fdiv half f0x3C00, undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; FP16-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; FP16-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %f32 = fdiv float 1.000000e+00, undef
; FP16-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; FP16-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -566,11 +566,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; FP16-NEXT: Cost Model: Found an estimated cost of 10 for instruction: ret void
;
; CI-SIZE-LABEL: 'rcp_ftzdaz'
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half 0xH3C00, undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half f0x3C00, undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f32 = fdiv float 1.000000e+00, undef
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -584,11 +584,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; CI-SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; SI-SIZE-LABEL: 'rcp_ftzdaz'
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half 0xH3C00, undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half f0x3C00, undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f32 = fdiv float 1.000000e+00, undef
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
@@ -602,11 +602,11 @@ define amdgpu_kernel void @rcp_ftzdaz() #1 {
; SI-SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; FP16-SIZE-LABEL: 'rcp_ftzdaz'
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half 0xH3C00, undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half 0xH3C00), undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half 0xH3C00), undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half 0xH3C00), undef
-; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half 0xH3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f16 = fdiv half f0x3C00, undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f16 = fdiv <2 x half> splat (half f0x3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v3f16 = fdiv <3 x half> splat (half f0x3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %v4f16 = fdiv <4 x half> splat (half f0x3C00), undef
+; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %v5f16 = fdiv <5 x half> splat (half f0x3C00), undef
; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %f32 = fdiv float 1.000000e+00, undef
; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %v2f32 = fdiv <2 x float> splat (float 1.000000e+00), undef
; FP16-SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %v3f32 = fdiv <3 x float> splat (float 1.000000e+00), undef
diff --git a/llvm/test/Analysis/CostModel/ARM/divrem.ll b/llvm/test/Analysis/CostModel/ARM/divrem.ll
index 9f0c29c8bb0c65..8893937b642565 100644
--- a/llvm/test/Analysis/CostModel/ARM/divrem.ll
+++ b/llvm/test/Analysis/CostModel/ARM/divrem.ll
@@ -279,36 +279,36 @@ define void @f16() {
; CHECK-NEON-LABEL: 'f16'
; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %1 = fdiv half undef, undef
; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %2 = frem half undef, undef
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %3 = fdiv half undef, 0xH4000
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %4 = frem half undef, 0xH4000
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %3 = fdiv half undef, f0x4000
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %4 = frem half undef, f0x4000
; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVE-LABEL: 'f16'
; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %1 = fdiv half undef, undef
; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %2 = frem half undef, undef
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = fdiv half undef, 0xH4000
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = frem half undef, 0xH4000
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = fdiv half undef, f0x4000
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = frem half undef, f0x4000
; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-V8M-MAIN-LABEL: 'f16'
; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %1 = fdiv half undef, undef
; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %2 = frem half undef, undef
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = fdiv half undef, 0xH4000
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = frem half undef, 0xH4000
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = fdiv half undef, f0x4000
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = frem half undef, f0x4000
; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; CHECK-V8M-BASE-LABEL: 'f16'
; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %1 = fdiv half undef, undef
; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %2 = frem half undef, undef
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = fdiv half undef, 0xH4000
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = frem half undef, 0xH4000
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = fdiv half undef, f0x4000
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = frem half undef, f0x4000
; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; CHECK-V8R-LABEL: 'f16'
; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %1 = fdiv half undef, undef
; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %2 = frem half undef, undef
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %3 = fdiv half undef, 0xH4000
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %4 = frem half undef, 0xH4000
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %3 = fdiv half undef, f0x4000
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %4 = frem half undef, f0x4000
; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
%1 = fdiv half undef, undef
@@ -1491,48 +1491,48 @@ define void @vi64_2() {
 
define void @vf16_2() {
; CHECK-NEON-LABEL: 'vf16_2'
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %1 = fdiv <2 x half> undef, splat (half 0xH4000)
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %2 = frem <2 x half> undef, splat (half 0xH4000)
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %3 = fdiv <4 x half> undef, splat (half 0xH4000)
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %4 = frem <4 x half> undef, splat (half 0xH4000)
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %5 = fdiv <8 x half> undef, splat (half 0xH4000)
-; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %6 = frem <8 x half> undef, splat (half 0xH4000)
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %1 = fdiv <2 x half> undef, splat (half f0x4000)
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %2 = frem <2 x half> undef, splat (half f0x4000)
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %3 = fdiv <4 x half> undef, splat (half f0x4000)
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %4 = frem <4 x half> undef, splat (half f0x4000)
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %5 = fdiv <8 x half> undef, splat (half f0x4000)
+; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %6 = frem <8 x half> undef, splat (half f0x4000)
; CHECK-NEON-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVE-LABEL: 'vf16_2'
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %1 = fdiv <2 x half> undef, splat (half 0xH4000)
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %2 = frem <2 x half> undef, splat (half 0xH4000)
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %3 = fdiv <4 x half> undef, splat (half 0xH4000)
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %4 = frem <4 x half> undef, splat (half 0xH4000)
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %5 = fdiv <8 x half> undef, splat (half 0xH4000)
-; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %6 = frem <8 x half> undef, splat (half 0xH4000)
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %1 = fdiv <2 x half> undef, splat (half f0x4000)
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %2 = frem <2 x half> undef, splat (half f0x4000)
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %3 = fdiv <4 x half> undef, splat (half f0x4000)
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %4 = frem <4 x half> undef, splat (half f0x4000)
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %5 = fdiv <8 x half> undef, splat (half f0x4000)
+; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %6 = frem <8 x half> undef, splat (half f0x4000)
; CHECK-MVE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-V8M-MAIN-LABEL: 'vf16_2'
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %1 = fdiv <2 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %2 = frem <2 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %3 = fdiv <4 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %4 = frem <4 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %5 = fdiv <8 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %6 = frem <8 x half> undef, splat (half 0xH4000)
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %1 = fdiv <2 x half> undef, splat (half f0x4000)
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %2 = frem <2 x half> undef, splat (half f0x4000)
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %3 = fdiv <4 x half> undef, splat (half f0x4000)
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %4 = frem <4 x half> undef, splat (half f0x4000)
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %5 = fdiv <8 x half> undef, splat (half f0x4000)
+; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %6 = frem <8 x half> undef, splat (half f0x4000)
; CHECK-V8M-MAIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; CHECK-V8M-BASE-LABEL: 'vf16_2'
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %1 = fdiv <2 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %2 = frem <2 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %3 = fdiv <4 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %4 = frem <4 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %5 = fdiv <8 x half> undef, splat (half 0xH4000)
-; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %6 = frem <8 x half> undef, splat (half 0xH4000)
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %1 = fdiv <2 x half> undef, splat (half f0x4000)
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %2 = frem <2 x half> undef, splat (half f0x4000)
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %3 = fdiv <4 x half> undef, splat (half f0x4000)
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %4 = frem <4 x half> undef, splat (half f0x4000)
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %5 = fdiv <8 x half> undef, splat (half f0x4000)
+; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %6 = frem <8 x half> undef, splat (half f0x4000)
; CHECK-V8M-BASE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
; CHECK-V8R-LABEL: 'vf16_2'
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %1 = fdiv <2 x half> undef, splat (half 0xH4000)
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %2 = frem <2 x half> undef, splat (half 0xH4000)
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %3 = fdiv <4 x half> undef, splat (half 0xH4000)
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %4 = frem <4 x half> undef, splat (half 0xH4000)
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %5 = fdiv <8 x half> undef, splat (half 0xH4000)
-; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %6 = frem <8 x half> undef, splat (half 0xH4000)
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %1 = fdiv <2 x half> undef, splat (half f0x4000)
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %2 = frem <2 x half> undef, splat (half f0x4000)
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %3 = fdiv <4 x half> undef, splat (half f0x4000)
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %4 = frem <4 x half> undef, splat (half f0x4000)
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %5 = fdiv <8 x half> undef, splat (half f0x4000)
+; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 48 for instruction: %6 = frem <8 x half> undef, splat (half f0x4000)
; CHECK-V8R-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
%1 = fdiv <2 x half> undef, <half 2., half 2.>
diff --git a/llvm/test/Analysis/CostModel/ARM/reduce-fp.ll b/llvm/test/Analysis/CostModel/ARM/reduce-fp.ll
index 87de486eeb1839..360c1520377f09 100644
--- a/llvm/test/Analysis/CostModel/ARM/reduce-fp.ll
+++ b/llvm/test/Analysis/CostModel/ARM/reduce-fp.ll
@@ -7,10 +7,10 @@ target datalayout = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128"
 
define void @fadd_strict() {
; CHECK-V8-LABEL: 'fadd_strict'
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 64 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 64 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v2f32 = call float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v4f32 = call float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fadd_v8f32 = call float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -20,10 +20,10 @@ define void @fadd_strict() {
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEFP-LABEL: 'fadd_strict'
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v2f32 = call float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v4f32 = call float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v8f32 = call float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -33,10 +33,10 @@ define void @fadd_strict() {
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEI-LABEL: 'fadd_strict'
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v2f16 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v4f16 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fadd_v8f16 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fadd_v16f16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v2f32 = call float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v4f32 = call float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fadd_v8f32 = call float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -61,10 +61,10 @@ define void @fadd_strict() {
 
define void @fadd_unordered() {
; CHECK-V8-LABEL: 'fadd_unordered'
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call reassoc half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fadd_v4f16 = call reassoc half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 44 for instruction: %fadd_v8f16 = call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 92 for instruction: %fadd_v16f16 = call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v2f16 = call reassoc half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fadd_v4f16 = call reassoc half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 44 for instruction: %fadd_v8f16 = call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 92 for instruction: %fadd_v16f16 = call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v2f32 = call reassoc float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v4f32 = call reassoc float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v8f32 = call reassoc float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -74,10 +74,10 @@ define void @fadd_unordered() {
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEFP-LABEL: 'fadd_unordered'
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v2f16 = call reassoc half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v4f16 = call reassoc half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v8f16 = call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v16f16 = call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fadd_v2f16 = call reassoc half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v4f16 = call reassoc half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fadd_v8f16 = call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fadd_v16f16 = call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fadd_v2f32 = call reassoc float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fadd_v4f32 = call reassoc float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fadd_v8f32 = call reassoc float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -87,10 +87,10 @@ define void @fadd_unordered() {
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEI-LABEL: 'fadd_unordered'
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %fadd_v2f16 = call reassoc half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 57 for instruction: %fadd_v4f16 = call reassoc half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 169 for instruction: %fadd_v8f16 = call reassoc half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 225 for instruction: %fadd_v16f16 = call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %fadd_v2f16 = call reassoc half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 57 for instruction: %fadd_v4f16 = call reassoc half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 169 for instruction: %fadd_v8f16 = call reassoc half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 225 for instruction: %fadd_v16f16 = call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %fadd_v2f32 = call reassoc float @llvm.vector.reduce.fadd.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 57 for instruction: %fadd_v4f32 = call reassoc float @llvm.vector.reduce.fadd.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 85 for instruction: %fadd_v8f32 = call reassoc float @llvm.vector.reduce.fadd.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -114,10 +114,10 @@ define void @fadd_unordered() {
 
define void @fmul_strict() {
; CHECK-V8-LABEL: 'fmul_strict'
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v2f16 = call half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fmul_v4f16 = call half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fmul_v8f16 = call half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 64 for instruction: %fmul_v16f16 = call half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v2f16 = call half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fmul_v4f16 = call half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fmul_v8f16 = call half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 64 for instruction: %fmul_v16f16 = call half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v2f32 = call float @llvm.vector.reduce.fmul.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v4f32 = call float @llvm.vector.reduce.fmul.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fmul_v8f32 = call float @llvm.vector.reduce.fmul.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -127,10 +127,10 @@ define void @fmul_strict() {
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEFP-LABEL: 'fmul_strict'
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fmul_v2f16 = call half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fmul_v4f16 = call half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fmul_v8f16 = call half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %fmul_v16f16 = call half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fmul_v2f16 = call half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fmul_v4f16 = call half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %fmul_v8f16 = call half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 24 for instruction: %fmul_v16f16 = call half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fmul_v2f32 = call float @llvm.vector.reduce.fmul.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v4f32 = call float @llvm.vector.reduce.fmul.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v8f32 = call float @llvm.vector.reduce.fmul.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -140,10 +140,10 @@ define void @fmul_strict() {
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEI-LABEL: 'fmul_strict'
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v2f16 = call half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v4f16 = call half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fmul_v8f16 = call half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fmul_v16f16 = call half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v2f16 = call half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v4f16 = call half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fmul_v8f16 = call half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 32 for instruction: %fmul_v16f16 = call half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v2f32 = call float @llvm.vector.reduce.fmul.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v4f32 = call float @llvm.vector.reduce.fmul.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %fmul_v8f32 = call float @llvm.vector.reduce.fmul.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -168,10 +168,10 @@ define void @fmul_strict() {
 
define void @fmul_unordered() {
; CHECK-V8-LABEL: 'fmul_unordered'
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v2f16 = call reassoc half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fmul_v4f16 = call reassoc half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 44 for instruction: %fmul_v8f16 = call reassoc half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 92 for instruction: %fmul_v16f16 = call reassoc half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v2f16 = call reassoc half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 20 for instruction: %fmul_v4f16 = call reassoc half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 44 for instruction: %fmul_v8f16 = call reassoc half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 92 for instruction: %fmul_v16f16 = call reassoc half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v2f32 = call reassoc float @llvm.vector.reduce.fmul.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fmul_v4f32 = call reassoc float @llvm.vector.reduce.fmul.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v8f32 = call reassoc float @llvm.vector.reduce.fmul.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -181,10 +181,10 @@ define void @fmul_unordered() {
; CHECK-V8-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEFP-LABEL: 'fmul_unordered'
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fmul_v2f16 = call reassoc half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fmul_v4f16 = call reassoc half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v8f16 = call reassoc half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fmul_v16f16 = call reassoc half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %fmul_v2f16 = call reassoc half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fmul_v4f16 = call reassoc half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %fmul_v8f16 = call reassoc half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %fmul_v16f16 = call reassoc half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %fmul_v2f32 = call reassoc float @llvm.vector.reduce.fmul.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %fmul_v4f32 = call reassoc float @llvm.vector.reduce.fmul.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %fmul_v8f32 = call reassoc float @llvm.vector.reduce.fmul.v8f32(float 0.000000e+00, <8 x float> undef)
@@ -194,10 +194,10 @@ define void @fmul_unordered() {
; CHECK-MVEFP-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; CHECK-MVEI-LABEL: 'fmul_unordered'
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %fmul_v2f16 = call reassoc half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 57 for instruction: %fmul_v4f16 = call reassoc half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 169 for instruction: %fmul_v8f16 = call reassoc half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 225 for instruction: %fmul_v16f16 = call reassoc half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %fmul_v2f16 = call reassoc half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 57 for instruction: %fmul_v4f16 = call reassoc half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 169 for instruction: %fmul_v8f16 = call reassoc half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 225 for instruction: %fmul_v16f16 = call reassoc half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %fmul_v2f32 = call reassoc float @llvm.vector.reduce.fmul.v2f32(float 0.000000e+00, <2 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 57 for instruction: %fmul_v4f32 = call reassoc float @llvm.vector.reduce.fmul.v4f32(float 0.000000e+00, <4 x float> undef)
; CHECK-MVEI-NEXT: Cost Model: Found an estimated cost of 85 for instruction: %fmul_v8f32 = call reassoc float @llvm.vector.reduce.fmul.v8f32(float 0.000000e+00, <8 x float> undef)
diff --git a/llvm/test/Analysis/CostModel/RISCV/phi-const.ll b/llvm/test/Analysis/CostModel/RISCV/phi-const.ll
index 00ff1925fc06c9..89b088759d98b2 100644
--- a/llvm/test/Analysis/CostModel/RISCV/phi-const.ll
+++ b/llvm/test/Analysis/CostModel/RISCV/phi-const.ll
@@ -132,7 +132,7 @@ define half @phi_f16(i1 %c) {
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br i1 %c, label %a, label %b
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
-; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi half [ 0xHE3CE, %a ], [ 0xH5144, %b ]
+; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi half [ f0xE3CE, %a ], [ f0x5144, %b ]
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret half %x
;
br i1 %c, label %a, label %b
diff --git a/llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll b/llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll
index 1762f701a9b2d5..b5093f11314353 100644
--- a/llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll
+++ b/llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll
@@ -5,37 +5,37 @@
 
define void @reduce_fadd_bfloat() {
; FP-REDUCE-LABEL: 'reduce_fadd_bfloat'
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_fadd_bfloat'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 21 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 26 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 21 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 26 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call fast bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0.0, <1 x bfloat> undef)
@@ -57,54 +57,54 @@ define void @reduce_fadd_bfloat() {
define void @reduce_fadd_half() {
; FP-REDUCE-ZVFH-LABEL: 'reduce_fadd_half'
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V2 = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V4 = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V8 = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V16 = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %v32 = call fast half @llvm.vector.reduce.fadd.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V64 = call fast half @llvm.vector.reduce.fadd.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V128 = call fast half @llvm.vector.reduce.fadd.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV1 = call fast half @llvm.vector.reduce.fadd.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %NXV2 = call fast half @llvm.vector.reduce.fadd.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %NXV4 = call fast half @llvm.vector.reduce.fadd.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %NXV8 = call fast half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %NXV16 = call fast half @llvm.vector.reduce.fadd.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %NXV32 = call fast half @llvm.vector.reduce.fadd.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V2 = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V4 = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V8 = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V16 = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %v32 = call fast half @llvm.vector.reduce.fadd.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V64 = call fast half @llvm.vector.reduce.fadd.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V128 = call fast half @llvm.vector.reduce.fadd.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV1 = call fast half @llvm.vector.reduce.fadd.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %NXV2 = call fast half @llvm.vector.reduce.fadd.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %NXV4 = call fast half @llvm.vector.reduce.fadd.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %NXV8 = call fast half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %NXV16 = call fast half @llvm.vector.reduce.fadd.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %NXV32 = call fast half @llvm.vector.reduce.fadd.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; FP-REDUCE-ZVFHMIN-LABEL: 'reduce_fadd_half'
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast half @llvm.vector.reduce.fadd.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast half @llvm.vector.reduce.fadd.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast half @llvm.vector.reduce.fadd.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fadd.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fadd.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fadd.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fadd.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fadd.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast half @llvm.vector.reduce.fadd.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast half @llvm.vector.reduce.fadd.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast half @llvm.vector.reduce.fadd.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fadd.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fadd.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fadd.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fadd.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fadd.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_fadd_half'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half 0xH0000, <1 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V2 = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V4 = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V8 = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V16 = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %v32 = call fast half @llvm.vector.reduce.fadd.v32f16(half 0xH0000, <32 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V64 = call fast half @llvm.vector.reduce.fadd.v64f16(half 0xH0000, <64 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V128 = call fast half @llvm.vector.reduce.fadd.v128f16(half 0xH0000, <128 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV1 = call fast half @llvm.vector.reduce.fadd.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV2 = call fast half @llvm.vector.reduce.fadd.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV4 = call fast half @llvm.vector.reduce.fadd.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV8 = call fast half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV16 = call fast half @llvm.vector.reduce.fadd.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV32 = call fast half @llvm.vector.reduce.fadd.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half f0x0000, <1 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V2 = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V4 = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V8 = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V16 = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %v32 = call fast half @llvm.vector.reduce.fadd.v32f16(half f0x0000, <32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V64 = call fast half @llvm.vector.reduce.fadd.v64f16(half f0x0000, <64 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V128 = call fast half @llvm.vector.reduce.fadd.v128f16(half f0x0000, <128 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV1 = call fast half @llvm.vector.reduce.fadd.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV2 = call fast half @llvm.vector.reduce.fadd.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV4 = call fast half @llvm.vector.reduce.fadd.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV8 = call fast half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV16 = call fast half @llvm.vector.reduce.fadd.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV32 = call fast half @llvm.vector.reduce.fadd.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call fast half @llvm.vector.reduce.fadd.v1f16(half 0.0, <1 x half> undef)
@@ -221,37 +221,37 @@ define void @reduce_fadd_double() {
define void @reduce_ordered_fadd_bfloat() {
; FP-REDUCE-LABEL: 'reduce_ordered_fadd_bfloat'
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_ordered_fadd_bfloat'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V2 = call bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 23 for instruction: %V8 = call bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 47 for instruction: %V16 = call bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 95 for instruction: %v32 = call bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 191 for instruction: %V64 = call bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 382 for instruction: %V128 = call bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V2 = call bfloat @llvm.vector.reduce.fadd.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call bfloat @llvm.vector.reduce.fadd.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 23 for instruction: %V8 = call bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 47 for instruction: %V16 = call bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 95 for instruction: %v32 = call bfloat @llvm.vector.reduce.fadd.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 191 for instruction: %V64 = call bfloat @llvm.vector.reduce.fadd.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 382 for instruction: %V128 = call bfloat @llvm.vector.reduce.fadd.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fadd.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fadd.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fadd.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fadd.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fadd.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fadd.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call bfloat @llvm.vector.reduce.fadd.v1bf16(bfloat 0.0, <1 x bfloat> undef)
@@ -273,54 +273,54 @@ define void @reduce_ordered_fadd_bfloat() {
define void @reduce_ordered_fadd_half() {
; FP-REDUCE-ZVFH-LABEL: 'reduce_ordered_fadd_half'
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fadd.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V2 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V4 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %V8 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %V16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %v32 = call half @llvm.vector.reduce.fadd.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 66 for instruction: %V64 = call half @llvm.vector.reduce.fadd.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 130 for instruction: %V128 = call half @llvm.vector.reduce.fadd.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %NXV1 = call half @llvm.vector.reduce.fadd.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %NXV2 = call half @llvm.vector.reduce.fadd.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %NXV4 = call half @llvm.vector.reduce.fadd.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %NXV8 = call half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %NXV16 = call half @llvm.vector.reduce.fadd.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 66 for instruction: %NXV32 = call half @llvm.vector.reduce.fadd.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fadd.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V2 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V4 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %V8 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %V16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %v32 = call half @llvm.vector.reduce.fadd.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 66 for instruction: %V64 = call half @llvm.vector.reduce.fadd.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 130 for instruction: %V128 = call half @llvm.vector.reduce.fadd.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %NXV1 = call half @llvm.vector.reduce.fadd.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %NXV2 = call half @llvm.vector.reduce.fadd.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 10 for instruction: %NXV4 = call half @llvm.vector.reduce.fadd.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 18 for instruction: %NXV8 = call half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %NXV16 = call half @llvm.vector.reduce.fadd.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 66 for instruction: %NXV32 = call half @llvm.vector.reduce.fadd.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; FP-REDUCE-ZVFHMIN-LABEL: 'reduce_ordered_fadd_half'
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fadd.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call half @llvm.vector.reduce.fadd.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call half @llvm.vector.reduce.fadd.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call half @llvm.vector.reduce.fadd.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call half @llvm.vector.reduce.fadd.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call half @llvm.vector.reduce.fadd.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call half @llvm.vector.reduce.fadd.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call half @llvm.vector.reduce.fadd.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call half @llvm.vector.reduce.fadd.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fadd.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call half @llvm.vector.reduce.fadd.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call half @llvm.vector.reduce.fadd.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call half @llvm.vector.reduce.fadd.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call half @llvm.vector.reduce.fadd.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call half @llvm.vector.reduce.fadd.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call half @llvm.vector.reduce.fadd.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call half @llvm.vector.reduce.fadd.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call half @llvm.vector.reduce.fadd.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_ordered_fadd_half'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fadd.v1f16(half 0xH0000, <1 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V2 = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V4 = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V8 = call half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V16 = call half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %v32 = call half @llvm.vector.reduce.fadd.v32f16(half 0xH0000, <32 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V64 = call half @llvm.vector.reduce.fadd.v64f16(half 0xH0000, <64 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V128 = call half @llvm.vector.reduce.fadd.v128f16(half 0xH0000, <128 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV1 = call half @llvm.vector.reduce.fadd.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV2 = call half @llvm.vector.reduce.fadd.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV4 = call half @llvm.vector.reduce.fadd.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV8 = call half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV16 = call half @llvm.vector.reduce.fadd.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV32 = call half @llvm.vector.reduce.fadd.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fadd.v1f16(half f0x0000, <1 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V2 = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V4 = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V8 = call half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V16 = call half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %v32 = call half @llvm.vector.reduce.fadd.v32f16(half f0x0000, <32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V64 = call half @llvm.vector.reduce.fadd.v64f16(half f0x0000, <64 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V128 = call half @llvm.vector.reduce.fadd.v128f16(half f0x0000, <128 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV1 = call half @llvm.vector.reduce.fadd.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV2 = call half @llvm.vector.reduce.fadd.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV4 = call half @llvm.vector.reduce.fadd.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV8 = call half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV16 = call half @llvm.vector.reduce.fadd.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %NXV32 = call half @llvm.vector.reduce.fadd.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call half @llvm.vector.reduce.fadd.v1f16(half 0.0, <1 x half> undef)
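The CHECK-line churn above is purely mechanical: every legacy type-prefixed bitpattern literal (0xR0000 for a bfloat zero, 0xH0000 for a half zero) becomes the uniform f0x0000 spelling, while the plain decimal constants in the IR bodies (e.g. 'bfloat 0.0') are untouched, since it is the cost-model's reprinted IR that renders constants in bitpattern form. A minimal hand-written sketch of the new spelling outside the FileCheck harness (function names are illustrative only; note that the declared type, not a prefix letter, selects how the 16-bit pattern is interpreted, as the identical f0x0000 in both the bfloat and half checks shows):

  define bfloat @bf_zero() {
    ; f0x0000 is the all-zero 16-bit pattern, i.e. +0.0 as bfloat
    ret bfloat f0x0000
  }

  define half @h_zero() {
    ; the same prefix-free literal serves half; the type disambiguates
    ret half f0x0000
  }
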
diff --git a/llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll b/llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll
index 211bcb1343eea4..fb88daf80be707 100644
--- a/llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll
+++ b/llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll
@@ -5,37 +5,37 @@
define void @reduce_fmul_bfloat() {
; FP-REDUCE-LABEL: 'reduce_fmul_bfloat'
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_fmul_bfloat'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 21 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 26 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V2 = call fast bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call fast bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V8 = call fast bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 21 for instruction: %V16 = call fast bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 26 for instruction: %v32 = call fast bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V64 = call fast bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %V128 = call fast bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call fast bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat 0.0, <1 x bfloat> undef)
@@ -57,54 +57,54 @@ define void @reduce_fmul_bfloat() {
define void @reduce_fmul_half() {
; FP-REDUCE-ZVFH-LABEL: 'reduce_fmul_half'
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call fast half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 13 for instruction: %V4 = call fast half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 19 for instruction: %V8 = call fast half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 49 for instruction: %V16 = call fast half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 151 for instruction: %v32 = call fast half @llvm.vector.reduce.fmul.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast half @llvm.vector.reduce.fmul.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast half @llvm.vector.reduce.fmul.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fmul.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fmul.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fmul.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fmul.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fmul.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fmul.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call fast half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 13 for instruction: %V4 = call fast half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 19 for instruction: %V8 = call fast half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 49 for instruction: %V16 = call fast half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 151 for instruction: %v32 = call fast half @llvm.vector.reduce.fmul.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast half @llvm.vector.reduce.fmul.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast half @llvm.vector.reduce.fmul.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fmul.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fmul.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fmul.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fmul.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fmul.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-ZVFH-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fmul.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-ZVFH-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; FP-REDUCE-ZVFHMIN-LABEL: 'reduce_fmul_half'
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast half @llvm.vector.reduce.fmul.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast half @llvm.vector.reduce.fmul.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast half @llvm.vector.reduce.fmul.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fmul.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fmul.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fmul.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fmul.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fmul.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fmul.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V2 = call fast half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call fast half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 28 for instruction: %V8 = call fast half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 73 for instruction: %V16 = call fast half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 211 for instruction: %v32 = call fast half @llvm.vector.reduce.fmul.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 541 for instruction: %V64 = call fast half @llvm.vector.reduce.fmul.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 573 for instruction: %V128 = call fast half @llvm.vector.reduce.fmul.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fmul.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fmul.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fmul.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fmul.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fmul.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fmul.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-ZVFHMIN-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_fmul_half'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half 0xH0000, <1 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V2 = call fast half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call fast half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V8 = call fast half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 21 for instruction: %V16 = call fast half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 26 for instruction: %v32 = call fast half @llvm.vector.reduce.fmul.v32f16(half 0xH0000, <32 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V64 = call fast half @llvm.vector.reduce.fmul.v64f16(half 0xH0000, <64 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %V128 = call fast half @llvm.vector.reduce.fmul.v128f16(half 0xH0000, <128 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fmul.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fmul.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fmul.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fmul.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fmul.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fmul.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half f0x0000, <1 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 6 for instruction: %V2 = call fast half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call fast half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V8 = call fast half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 21 for instruction: %V16 = call fast half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 26 for instruction: %v32 = call fast half @llvm.vector.reduce.fmul.v32f16(half f0x0000, <32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V64 = call fast half @llvm.vector.reduce.fmul.v64f16(half f0x0000, <64 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 34 for instruction: %V128 = call fast half @llvm.vector.reduce.fmul.v128f16(half f0x0000, <128 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call fast half @llvm.vector.reduce.fmul.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call fast half @llvm.vector.reduce.fmul.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call fast half @llvm.vector.reduce.fmul.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call fast half @llvm.vector.reduce.fmul.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call fast half @llvm.vector.reduce.fmul.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call fast half @llvm.vector.reduce.fmul.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call fast half @llvm.vector.reduce.fmul.v1f16(half 0.0, <1 x half> undef)
@@ -221,37 +221,37 @@ define void @reduce_fmul_double() {
define void @reduce_ordered_fmul_bfloat() {
; FP-REDUCE-LABEL: 'reduce_ordered_fmul_bfloat'
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_ordered_fmul_bfloat'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat 0xR0000, <1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V2 = call bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat 0xR0000, <2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat 0xR0000, <4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 23 for instruction: %V8 = call bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat 0xR0000, <8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 47 for instruction: %V16 = call bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat 0xR0000, <16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 95 for instruction: %v32 = call bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat 0xR0000, <32 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 191 for instruction: %V64 = call bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat 0xR0000, <64 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 382 for instruction: %V128 = call bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat 0xR0000, <128 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat 0xR0000, <vscale x 1 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat 0xR0000, <vscale x 2 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat 0xR0000, <vscale x 4 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat 0xR0000, <vscale x 8 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat 0xR0000, <vscale x 16 x bfloat> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat 0xR0000, <vscale x 32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat f0x0000, <1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V2 = call bfloat @llvm.vector.reduce.fmul.v2bf16(bfloat f0x0000, <2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call bfloat @llvm.vector.reduce.fmul.v4bf16(bfloat f0x0000, <4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 23 for instruction: %V8 = call bfloat @llvm.vector.reduce.fmul.v8bf16(bfloat f0x0000, <8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 47 for instruction: %V16 = call bfloat @llvm.vector.reduce.fmul.v16bf16(bfloat f0x0000, <16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 95 for instruction: %v32 = call bfloat @llvm.vector.reduce.fmul.v32bf16(bfloat f0x0000, <32 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 191 for instruction: %V64 = call bfloat @llvm.vector.reduce.fmul.v64bf16(bfloat f0x0000, <64 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 382 for instruction: %V128 = call bfloat @llvm.vector.reduce.fmul.v128bf16(bfloat f0x0000, <128 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call bfloat @llvm.vector.reduce.fmul.nxv1bf16(bfloat f0x0000, <vscale x 1 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call bfloat @llvm.vector.reduce.fmul.nxv2bf16(bfloat f0x0000, <vscale x 2 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call bfloat @llvm.vector.reduce.fmul.nxv4bf16(bfloat f0x0000, <vscale x 4 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call bfloat @llvm.vector.reduce.fmul.nxv8bf16(bfloat f0x0000, <vscale x 8 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call bfloat @llvm.vector.reduce.fmul.nxv16bf16(bfloat f0x0000, <vscale x 16 x bfloat> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call bfloat @llvm.vector.reduce.fmul.nxv32bf16(bfloat f0x0000, <vscale x 32 x bfloat> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call bfloat @llvm.vector.reduce.fmul.v1bf16(bfloat 0.0, <1 x bfloat> undef)
@@ -273,37 +273,37 @@ define void @reduce_ordered_fmul_bfloat() {
define void @reduce_ordered_fmul_half() {
; FP-REDUCE-LABEL: 'reduce_ordered_fmul_half'
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fmul.v1f16(half 0xH0000, <1 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call half @llvm.vector.reduce.fmul.v32f16(half 0xH0000, <32 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call half @llvm.vector.reduce.fmul.v64f16(half 0xH0000, <64 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call half @llvm.vector.reduce.fmul.v128f16(half 0xH0000, <128 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call half @llvm.vector.reduce.fmul.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call half @llvm.vector.reduce.fmul.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call half @llvm.vector.reduce.fmul.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call half @llvm.vector.reduce.fmul.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call half @llvm.vector.reduce.fmul.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call half @llvm.vector.reduce.fmul.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %V1 = call half @llvm.vector.reduce.fmul.v1f16(half f0x0000, <1 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 7 for instruction: %V2 = call half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 15 for instruction: %V4 = call half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 31 for instruction: %V8 = call half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 63 for instruction: %V16 = call half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 127 for instruction: %v32 = call half @llvm.vector.reduce.fmul.v32f16(half f0x0000, <32 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 255 for instruction: %V64 = call half @llvm.vector.reduce.fmul.v64f16(half f0x0000, <64 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 510 for instruction: %V128 = call half @llvm.vector.reduce.fmul.v128f16(half f0x0000, <128 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call half @llvm.vector.reduce.fmul.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call half @llvm.vector.reduce.fmul.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call half @llvm.vector.reduce.fmul.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call half @llvm.vector.reduce.fmul.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call half @llvm.vector.reduce.fmul.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; FP-REDUCE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call half @llvm.vector.reduce.fmul.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; FP-REDUCE-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
; SIZE-LABEL: 'reduce_ordered_fmul_half'
-; SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call half @llvm.vector.reduce.fmul.v1f16(half 0xH0000, <1 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V2 = call half @llvm.vector.reduce.fmul.v2f16(half 0xH0000, <2 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call half @llvm.vector.reduce.fmul.v4f16(half 0xH0000, <4 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 23 for instruction: %V8 = call half @llvm.vector.reduce.fmul.v8f16(half 0xH0000, <8 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 47 for instruction: %V16 = call half @llvm.vector.reduce.fmul.v16f16(half 0xH0000, <16 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 95 for instruction: %v32 = call half @llvm.vector.reduce.fmul.v32f16(half 0xH0000, <32 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 191 for instruction: %V64 = call half @llvm.vector.reduce.fmul.v64f16(half 0xH0000, <64 x half> undef)
-; SIZE-NEXT: Cost Model: Found an estimated cost of 382 for instruction: %V128 = call half @llvm.vector.reduce.fmul.v128f16(half 0xH0000, <128 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call half @llvm.vector.reduce.fmul.nxv1f16(half 0xH0000, <vscale x 1 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call half @llvm.vector.reduce.fmul.nxv2f16(half 0xH0000, <vscale x 2 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call half @llvm.vector.reduce.fmul.nxv4f16(half 0xH0000, <vscale x 4 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call half @llvm.vector.reduce.fmul.nxv8f16(half 0xH0000, <vscale x 8 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call half @llvm.vector.reduce.fmul.nxv16f16(half 0xH0000, <vscale x 16 x half> undef)
-; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call half @llvm.vector.reduce.fmul.nxv32f16(half 0xH0000, <vscale x 32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V1 = call half @llvm.vector.reduce.fmul.v1f16(half f0x0000, <1 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 5 for instruction: %V2 = call half @llvm.vector.reduce.fmul.v2f16(half f0x0000, <2 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 11 for instruction: %V4 = call half @llvm.vector.reduce.fmul.v4f16(half f0x0000, <4 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 23 for instruction: %V8 = call half @llvm.vector.reduce.fmul.v8f16(half f0x0000, <8 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 47 for instruction: %V16 = call half @llvm.vector.reduce.fmul.v16f16(half f0x0000, <16 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 95 for instruction: %v32 = call half @llvm.vector.reduce.fmul.v32f16(half f0x0000, <32 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 191 for instruction: %V64 = call half @llvm.vector.reduce.fmul.v64f16(half f0x0000, <64 x half> undef)
+; SIZE-NEXT: Cost Model: Found an estimated cost of 382 for instruction: %V128 = call half @llvm.vector.reduce.fmul.v128f16(half f0x0000, <128 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV1 = call half @llvm.vector.reduce.fmul.nxv1f16(half f0x0000, <vscale x 1 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV2 = call half @llvm.vector.reduce.fmul.nxv2f16(half f0x0000, <vscale x 2 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV4 = call half @llvm.vector.reduce.fmul.nxv4f16(half f0x0000, <vscale x 4 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV8 = call half @llvm.vector.reduce.fmul.nxv8f16(half f0x0000, <vscale x 8 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV16 = call half @llvm.vector.reduce.fmul.nxv16f16(half f0x0000, <vscale x 16 x half> undef)
+; SIZE-NEXT: Cost Model: Invalid cost for instruction: %NXV32 = call half @llvm.vector.reduce.fmul.nxv32f16(half f0x0000, <vscale x 32 x half> undef)
; SIZE-NEXT: Cost Model: Found an estimated cost of 1 for instruction: ret void
;
%V1 = call half @llvm.vector.reduce.fmul.v1f16(half 0.0, <1 x half> undef)
diff --git a/llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll b/llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll
index 344a34fa9a6309..16b6a4d4af4be9 100644
--- a/llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll
+++ b/llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll
@@ -316,7 +316,7 @@ define <4 x half> @phi_v4f16_splat(i1 %c) {
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br i1 %c, label %a, label %b
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
-; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi <4 x half> [ splat (half 0xH3C00), %a ], [ <half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4D00>, %b ]
+; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi <4 x half> [ splat (half f0x3C00), %a ], [ <half f0x4000, half f0x4000, half f0x4000, half f0x4D00>, %b ]
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret <4 x half> %x
;
br i1 %c, label %a, label %b
@@ -334,7 +334,7 @@ define <4 x half> @phi_v4f16(i1 %c) {
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br i1 %c, label %a, label %b
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
-; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi <4 x half> [ <half 0xH3C00, half 0xH4000, half 0xH4200, half 0xH4400>, %a ], [ <half 0xH4000, half 0xH4400, half 0xH4600, half 0xH4800>, %b ]
+; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi <4 x half> [ <half f0x3C00, half f0x4000, half f0x4200, half f0x4400>, %a ], [ <half f0x4000, half f0x4400, half f0x4600, half f0x4800>, %b ]
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret <4 x half> %x
;
br i1 %c, label %a, label %b
@@ -353,7 +353,7 @@ define <4 x half> @phi_v4f16_cheap_and_expensive(i1 %c) {
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br i1 %c, label %a, label %b
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: br label %d
-; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi <4 x half> [ splat (half 0xH3C00), %a ], [ <half 0xH6F42, half 0xHECB8, half 0xH5DF6, half 0xH4A40>, %b ]
+; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: %x = phi <4 x half> [ splat (half f0x3C00), %a ], [ <half f0x6F42, half f0xECB8, half f0x5DF6, half f0x4A40>, %b ]
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret <4 x half> %x
;
br i1 %c, label %a, label %b
diff --git a/llvm/test/Analysis/Lint/scalable.ll b/llvm/test/Analysis/Lint/scalable.ll
index bc12d6738d2aa3..9f12389b25a433 100644
--- a/llvm/test/Analysis/Lint/scalable.ll
+++ b/llvm/test/Analysis/Lint/scalable.ll
@@ -16,7 +16,7 @@ define <vscale x 8 x i8> @alloca_access2() {
; CHECK-NOT: insertelement index out of range
define <vscale x 8 x half> @insertelement() {
- %insert = insertelement <vscale x 8 x half> poison, half 0xH0000, i64 100
+ %insert = insertelement <vscale x 8 x half> poison, half f0x0000, i64 100
ret <vscale x 8 x half> %insert
}
diff --git a/llvm/test/Assembler/bfloat.ll b/llvm/test/Assembler/bfloat.ll
index 3a3b4c2b277db7..0ab376f2338f3f 100644
--- a/llvm/test/Assembler/bfloat.ll
+++ b/llvm/test/Assembler/bfloat.ll
@@ -9,8 +9,8 @@ define bfloat @check_bfloat(bfloat %A) {
}
define bfloat @check_bfloat_literal() {
-; ASSEM-DISASS: ret bfloat 0xR3149
- ret bfloat 0xR3149
+; ASSEM-DISASS: ret bfloat f0x3149
+ ret bfloat f0x3149
}
define <4 x bfloat> @check_fixed_vector() {
@@ -26,37 +26,37 @@ define <vscale x 4 x bfloat> @check_vector() {
}
define bfloat @check_bfloat_constprop() {
- %tmp = fadd bfloat 0xR40C0, 0xR40C0
-; OPT: 0xR4140
+ %tmp = fadd bfloat f0x40C0, f0x40C0
+; OPT: f0x4140
ret bfloat %tmp
}
define float @check_bfloat_convert() {
- %tmp = fpext bfloat 0xR4C8D to float
+ %tmp = fpext bfloat f0x4C8D to float
; OPT: 0x4191A00000000000
ret float %tmp
}
; ASSEM-DISASS-LABEL @snan_bfloat
define bfloat @snan_bfloat() {
-; ASSEM-DISASS: ret bfloat 0xR7F81
- ret bfloat 0xR7F81
+; ASSEM-DISASS: ret bfloat f0x7F81
+ ret bfloat f0x7F81
}
; ASSEM-DISASS-LABEL @qnan_bfloat
define bfloat @qnan_bfloat() {
-; ASSEM-DISASS: ret bfloat 0xR7FC0
- ret bfloat 0xR7FC0
+; ASSEM-DISASS: ret bfloat f0x7FC0
+ ret bfloat f0x7FC0
}
; ASSEM-DISASS-LABEL @pos_inf_bfloat
define bfloat @pos_inf_bfloat() {
-; ASSEM-DISASS: ret bfloat 0xR7F80
- ret bfloat 0xR7F80
+; ASSEM-DISASS: ret bfloat f0x7F80
+ ret bfloat f0x7F80
}
; ASSEM-DISASS-LABEL @neg_inf_bfloat
define bfloat @neg_inf_bfloat() {
-; ASSEM-DISASS: ret bfloat 0xRFF80
- ret bfloat 0xRFF80
+; ASSEM-DISASS: ret bfloat f0xFF80
+ ret bfloat f0xFF80
}
diff --git a/llvm/test/Assembler/constant-splat.ll b/llvm/test/Assembler/constant-splat.ll
index 82e25adda0e108..8bc63c335e63bc 100644
--- a/llvm/test/Assembler/constant-splat.ll
+++ b/llvm/test/Assembler/constant-splat.ll
@@ -15,8 +15,8 @@
; CHECK: @constant.splat.i128 = constant <2 x i128> splat (i128 85070591730234615870450834276742070272)
@constant.splat.i128 = constant <2 x i128> splat (i128 85070591730234615870450834276742070272)
-; CHECK: @constant.splat.f16 = constant <4 x half> splat (half 0xHBC00)
-@constant.splat.f16 = constant <4 x half> splat (half 0xHBC00)
+; CHECK: @constant.splat.f16 = constant <4 x half> splat (half f0xBC00)
+@constant.splat.f16 = constant <4 x half> splat (half f0xBC00)
; CHECK: @constant.splat.f32 = constant <5 x float> splat (float -2.000000e+00)
@constant.splat.f32 = constant <5 x float> splat (float -2.000000e+00)
@@ -24,17 +24,17 @@
; CHECK: @constant.splat.f64 = constant <3 x double> splat (double -3.000000e+00)
@constant.splat.f64 = constant <3 x double> splat (double -3.000000e+00)
-; CHECK: @constant.splat.128 = constant <2 x fp128> splat (fp128 0xL00000000000000018000000000000000)
-@constant.splat.128 = constant <2 x fp128> splat (fp128 0xL00000000000000018000000000000000)
+; CHECK: @constant.splat.128 = constant <2 x fp128> splat (fp128 f0x80000000000000000000000000000001)
+@constant.splat.128 = constant <2 x fp128> splat (fp128 f0x80000000000000000000000000000001)
-; CHECK: @constant.splat.bf16 = constant <4 x bfloat> splat (bfloat 0xRC0A0)
-@constant.splat.bf16 = constant <4 x bfloat> splat (bfloat 0xRC0A0)
+; CHECK: @constant.splat.bf16 = constant <4 x bfloat> splat (bfloat f0xC0A0)
+@constant.splat.bf16 = constant <4 x bfloat> splat (bfloat f0xC0A0)
-; CHECK: @constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 0xK4000C8F5C28F5C28F800)
-@constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 0xK4000C8F5C28F5C28F800)
+; CHECK: @constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 f0x4000C8F5C28F5C28F800)
+@constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 f0x4000C8F5C28F5C28F800)
-; CHECK: @constant.splat.ppc_fp128 = constant <1 x ppc_fp128> splat (ppc_fp128 0xM80000000000000000000000000000000)
-@constant.splat.ppc_fp128 = constant <1 x ppc_fp128> splat (ppc_fp128 0xM80000000000000000000000000000000)
+; CHECK: @constant.splat.ppc_fp128 = constant <1 x ppc_fp128> splat (ppc_fp128 f0x00000000000000008000000000000000)
+@constant.splat.ppc_fp128 = constant <1 x ppc_fp128> splat (ppc_fp128 f0x00000000000000008000000000000000)
; CHECK: @constant.splat.global.ptr = constant <4 x ptr> <ptr @my_global, ptr @my_global, ptr @my_global, ptr @my_global>
@constant.splat.global.ptr = constant <4 x ptr> splat (ptr @my_global)
diff --git a/llvm/test/Assembler/half-constprop.ll b/llvm/test/Assembler/half-constprop.ll
index d26545d5584e07..87ddbe51e9de99 100644
--- a/llvm/test/Assembler/half-constprop.ll
+++ b/llvm/test/Assembler/half-constprop.ll
@@ -7,12 +7,12 @@ entry:
%a = alloca half, align 2
%b = alloca half, align 2
%.compoundliteral = alloca float, align 4
- store half 0xH4200, ptr %a, align 2
- store half 0xH4B9A, ptr %b, align 2
+ store half f0x4200, ptr %a, align 2
+ store half f0x4B9A, ptr %b, align 2
%tmp = load half, ptr %a, align 2
%tmp1 = load half, ptr %b, align 2
%add = fadd half %tmp, %tmp1
-; CHECK: 0xH4C8D
+; CHECK: f0x4C8D
ret half %add
}
diff --git a/llvm/test/Assembler/half-conv.ll b/llvm/test/Assembler/half-conv.ll
index 219c5b065611ab..84c800852fc306 100644
--- a/llvm/test/Assembler/half-conv.ll
+++ b/llvm/test/Assembler/half-conv.ll
@@ -6,7 +6,7 @@ define float @abc() nounwind {
entry:
%a = alloca half, align 2
%.compoundliteral = alloca float, align 4
- store half 0xH4C8D, ptr %a, align 2
+ store half f0x4C8D, ptr %a, align 2
%tmp = load half, ptr %a, align 2
%conv = fpext half %tmp to float
; CHECK: 0x4032340000000000
diff --git a/llvm/test/Assembler/invalid-fp80hex.ll b/llvm/test/Assembler/invalid-fp80hex.ll
index 70c518dd648ea9..4091121b62bf99 100644
--- a/llvm/test/Assembler/invalid-fp80hex.ll
+++ b/llvm/test/Assembler/invalid-fp80hex.ll
@@ -3,4 +3,4 @@
; Tests bug: 24640
; CHECK: expected '=' in global variable
-@- 0xKate potb8ed
+@- f0xate potb8ed
diff --git a/llvm/test/Assembler/short-hexpair.ll b/llvm/test/Assembler/short-hexpair.ll
index 067ea30b0ddb32..830c5038100310 100644
--- a/llvm/test/Assembler/short-hexpair.ll
+++ b/llvm/test/Assembler/short-hexpair.ll
@@ -1,4 +1,4 @@
; RUN: llvm-as < %s | llvm-dis | FileCheck %s
@x = global fp128 0xL01
-; CHECK: @x = global fp128 0xL00000000000000000000000000000001
+; CHECK: @x = global fp128 f0x00000000000000010000000000000000
diff --git a/llvm/test/Assembler/unnamed.ll b/llvm/test/Assembler/unnamed.ll
index 1f4eef5d9cceff..f2e81d936b6f73 100644
--- a/llvm/test/Assembler/unnamed.ll
+++ b/llvm/test/Assembler/unnamed.ll
@@ -13,7 +13,7 @@ module asm "this is another inline asm block"
@0 = global i32 0
@1 = global float 3.0
@2 = global ptr null
-@3 = global x86_fp80 0xK4001E000000000000000
+@3 = global x86_fp80 f0x4001E000000000000000
define float @foo(ptr %p) nounwind {
%t = load %0, ptr %p ; <%0> [#uses=2]
diff --git a/llvm/test/Bitcode/compatibility-3.8.ll b/llvm/test/Bitcode/compatibility-3.8.ll
index 7f766aa34a005f..e1fdd6c2d22ec5 100644
--- a/llvm/test/Bitcode/compatibility-3.8.ll
+++ b/llvm/test/Bitcode/compatibility-3.8.ll
@@ -53,7 +53,7 @@ $comdat.samesize = comdat samesize
@constant.array.i32 = constant [3 x i32] [i32 -0, i32 1, i32 0]
; CHECK: @constant.array.i64 = constant [3 x i64] [i64 0, i64 1, i64 0]
@constant.array.i64 = constant [3 x i64] [i64 -0, i64 1, i64 0]
-; CHECK: @constant.array.f16 = constant [3 x half] [half 0xH8000, half 0xH3C00, half 0xH0000]
+; CHECK: @constant.array.f16 = constant [3 x half] [half f0x8000, half f0x3C00, half f0x0000]
@constant.array.f16 = constant [3 x half] [half -0.0, half 1.0, half 0.0]
; CHECK: @constant.array.f32 = constant [3 x float] [float -0.000000e+00, float 1.000000e+00, float 0.000000e+00]
@constant.array.f32 = constant [3 x float] [float -0.0, float 1.0, float 0.0]
@@ -68,7 +68,7 @@ $comdat.samesize = comdat samesize
@constant.vector.i32 = constant <3 x i32> <i32 -0, i32 1, i32 0>
; CHECK: @constant.vector.i64 = constant <3 x i64> <i64 0, i64 1, i64 0>
@constant.vector.i64 = constant <3 x i64> <i64 -0, i64 1, i64 0>
-; CHECK: @constant.vector.f16 = constant <3 x half> <half 0xH8000, half 0xH3C00, half 0xH0000>
+; CHECK: @constant.vector.f16 = constant <3 x half> <half f0x8000, half f0x3C00, half f0x0000>
@constant.vector.f16 = constant <3 x half> <half -0.0, half 1.0, half 0.0>
; CHECK: @constant.vector.f32 = constant <3 x float> <float -0.000000e+00, float 1.000000e+00, float 0.000000e+00>
@constant.vector.f32 = constant <3 x float> <float -0.0, float 1.0, float 0.0>
diff --git a/llvm/test/Bitcode/compatibility-3.9.ll b/llvm/test/Bitcode/compatibility-3.9.ll
index c8309175e063f0..2de7c975261403 100644
--- a/llvm/test/Bitcode/compatibility-3.9.ll
+++ b/llvm/test/Bitcode/compatibility-3.9.ll
@@ -53,7 +53,7 @@ $comdat.samesize = comdat samesize
@constant.array.i32 = constant [3 x i32] [i32 -0, i32 1, i32 0]
; CHECK: @constant.array.i64 = constant [3 x i64] [i64 0, i64 1, i64 0]
@constant.array.i64 = constant [3 x i64] [i64 -0, i64 1, i64 0]
-; CHECK: @constant.array.f16 = constant [3 x half] [half 0xH8000, half 0xH3C00, half 0xH0000]
+; CHECK: @constant.array.f16 = constant [3 x half] [half f0x8000, half f0x3C00, half f0x0000]
@constant.array.f16 = constant [3 x half] [half -0.0, half 1.0, half 0.0]
; CHECK: @constant.array.f32 = constant [3 x float] [float -0.000000e+00, float 1.000000e+00, float 0.000000e+00]
@constant.array.f32 = constant [3 x float] [float -0.0, float 1.0, float 0.0]
@@ -68,7 +68,7 @@ $comdat.samesize = comdat samesize
@constant.vector.i32 = constant <3 x i32> <i32 -0, i32 1, i32 0>
; CHECK: @constant.vector.i64 = constant <3 x i64> <i64 0, i64 1, i64 0>
@constant.vector.i64 = constant <3 x i64> <i64 -0, i64 1, i64 0>
-; CHECK: @constant.vector.f16 = constant <3 x half> <half 0xH8000, half 0xH3C00, half 0xH0000>
+; CHECK: @constant.vector.f16 = constant <3 x half> <half f0x8000, half f0x3C00, half f0x0000>
@constant.vector.f16 = constant <3 x half> <half -0.0, half 1.0, half 0.0>
; CHECK: @constant.vector.f32 = constant <3 x float> <float -0.000000e+00, float 1.000000e+00, float 0.000000e+00>
@constant.vector.f32 = constant <3 x float> <float -0.0, float 1.0, float 0.0>
diff --git a/llvm/test/Bitcode/compatibility-4.0.ll b/llvm/test/Bitcode/compatibility-4.0.ll
index adbd91ac6c7fe5..0f2990a35f73db 100644
--- a/llvm/test/Bitcode/compatibility-4.0.ll
+++ b/llvm/test/Bitcode/compatibility-4.0.ll
@@ -53,7 +53,7 @@ $comdat.samesize = comdat samesize
@constant.array.i32 = constant [3 x i32] [i32 -0, i32 1, i32 0]
; CHECK: @constant.array.i64 = constant [3 x i64] [i64 0, i64 1, i64 0]
@constant.array.i64 = constant [3 x i64] [i64 -0, i64 1, i64 0]
-; CHECK: @constant.array.f16 = constant [3 x half] [half 0xH8000, half 0xH3C00, half 0xH0000]
+; CHECK: @constant.array.f16 = constant [3 x half] [half f0x8000, half f0x3C00, half f0x0000]
@constant.array.f16 = constant [3 x half] [half -0.0, half 1.0, half 0.0]
; CHECK: @constant.array.f32 = constant [3 x float] [float -0.000000e+00, float 1.000000e+00, float 0.000000e+00]
@constant.array.f32 = constant [3 x float] [float -0.0, float 1.0, float 0.0]
@@ -68,7 +68,7 @@ $comdat.samesize = comdat samesize
@constant.vector.i32 = constant <3 x i32> <i32 -0, i32 1, i32 0>
; CHECK: @constant.vector.i64 = constant <3 x i64> <i64 0, i64 1, i64 0>
@constant.vector.i64 = constant <3 x i64> <i64 -0, i64 1, i64 0>
-; CHECK: @constant.vector.f16 = constant <3 x half> <half 0xH8000, half 0xH3C00, half 0xH0000>
+; CHECK: @constant.vector.f16 = constant <3 x half> <half f0x8000, half f0x3C00, half f0x0000>
@constant.vector.f16 = constant <3 x half> <half -0.0, half 1.0, half 0.0>
; CHECK: @constant.vector.f32 = constant <3 x float> <float -0.000000e+00, float 1.000000e+00, float 0.000000e+00>
@constant.vector.f32 = constant <3 x float> <float -0.0, float 1.0, float 0.0>
diff --git a/llvm/test/Bitcode/compatibility-5.0.ll b/llvm/test/Bitcode/compatibility-5.0.ll
index 1b500da69568af..c1fc6c8ea283ac 100644
--- a/llvm/test/Bitcode/compatibility-5.0.ll
+++ b/llvm/test/Bitcode/compatibility-5.0.ll
@@ -53,7 +53,7 @@ $comdat.samesize = comdat samesize
@constant.array.i32 = constant [3 x i32] [i32 -0, i32 1, i32 0]
; CHECK: @constant.array.i64 = constant [3 x i64] [i64 0, i64 1, i64 0]
@constant.array.i64 = constant [3 x i64] [i64 -0, i64 1, i64 0]
-; CHECK: @constant.array.f16 = constant [3 x half] [half 0xH8000, half 0xH3C00, half 0xH0000]
+; CHECK: @constant.array.f16 = constant [3 x half] [half f0x8000, half f0x3C00, half f0x0000]
@constant.array.f16 = constant [3 x half] [half -0.0, half 1.0, half 0.0]
; CHECK: @constant.array.f32 = constant [3 x float] [float -0.000000e+00, float 1.000000e+00, float 0.000000e+00]
@constant.array.f32 = constant [3 x float] [float -0.0, float 1.0, float 0.0]
@@ -68,7 +68,7 @@ $comdat.samesize = comdat samesize
@constant.vector.i32 = constant <3 x i32> <i32 -0, i32 1, i32 0>
; CHECK: @constant.vector.i64 = constant <3 x i64> <i64 0, i64 1, i64 0>
@constant.vector.i64 = constant <3 x i64> <i64 -0, i64 1, i64 0>
-; CHECK: @constant.vector.f16 = constant <3 x half> <half 0xH8000, half 0xH3C00, half 0xH0000>
+; CHECK: @constant.vector.f16 = constant <3 x half> <half f0x8000, half f0x3C00, half f0x0000>
@constant.vector.f16 = constant <3 x half> <half -0.0, half 1.0, half 0.0>
; CHECK: @constant.vector.f32 = constant <3 x float> <float -0.000000e+00, float 1.000000e+00, float 0.000000e+00>
@constant.vector.f32 = constant <3 x float> <float -0.0, float 1.0, float 0.0>
diff --git a/llvm/test/Bitcode/compatibility-6.0.ll b/llvm/test/Bitcode/compatibility-6.0.ll
index c1abbf0cda6eb9..6329d7c6716f57 100644
--- a/llvm/test/Bitcode/compatibility-6.0.ll
+++ b/llvm/test/Bitcode/compatibility-6.0.ll
@@ -53,7 +53,7 @@ $comdat.samesize = comdat samesize
@constant.array.i32 = constant [3 x i32] [i32 -0, i32 1, i32 0]
; CHECK: @constant.array.i64 = constant [3 x i64] [i64 0, i64 1, i64 0]
@constant.array.i64 = constant [3 x i64] [i64 -0, i64 1, i64 0]
-; CHECK: @constant.array.f16 = constant [3 x half] [half 0xH8000, half 0xH3C00, half 0xH0000]
+; CHECK: @constant.array.f16 = constant [3 x half] [half f0x8000, half f0x3C00, half f0x0000]
@constant.array.f16 = constant [3 x half] [half -0.0, half 1.0, half 0.0]
; CHECK: @constant.array.f32 = constant [3 x float] [float -0.000000e+00, float 1.000000e+00, float 0.000000e+00]
@constant.array.f32 = constant [3 x float] [float -0.0, float 1.0, float 0.0]
@@ -68,7 +68,7 @@ $comdat.samesize = comdat samesize
@constant.vector.i32 = constant <3 x i32> <i32 -0, i32 1, i32 0>
; CHECK: @constant.vector.i64 = constant <3 x i64> <i64 0, i64 1, i64 0>
@constant.vector.i64 = constant <3 x i64> <i64 -0, i64 1, i64 0>
-; CHECK: @constant.vector.f16 = constant <3 x half> <half 0xH8000, half 0xH3C00, half 0xH0000>
+; CHECK: @constant.vector.f16 = constant <3 x half> <half f0x8000, half f0x3C00, half f0x0000>
@constant.vector.f16 = constant <3 x half> <half -0.0, half 1.0, half 0.0>
; CHECK: @constant.vector.f32 = constant <3 x float> <float -0.000000e+00, float 1.000000e+00, float 0.000000e+00>
@constant.vector.f32 = constant <3 x float> <float -0.0, float 1.0, float 0.0>
diff --git a/llvm/test/Bitcode/compatibility.ll b/llvm/test/Bitcode/compatibility.ll
index a28156cdaa2797..dd6088eaee2e50 100644
--- a/llvm/test/Bitcode/compatibility.ll
+++ b/llvm/test/Bitcode/compatibility.ll
@@ -56,7 +56,7 @@ $comdat.samesize = comdat samesize
@constant.array.i32 = constant [3 x i32] [i32 -0, i32 1, i32 0]
; CHECK: @constant.array.i64 = constant [3 x i64] [i64 0, i64 1, i64 0]
@constant.array.i64 = constant [3 x i64] [i64 -0, i64 1, i64 0]
-; CHECK: @constant.array.f16 = constant [3 x half] [half 0xH8000, half 0xH3C00, half 0xH0000]
+; CHECK: @constant.array.f16 = constant [3 x half] [half f0x8000, half f0x3C00, half f0x0000]
@constant.array.f16 = constant [3 x half] [half -0.0, half 1.0, half 0.0]
; CHECK: @constant.array.f32 = constant [3 x float] [float -0.000000e+00, float 1.000000e+00, float 0.000000e+00]
@constant.array.f32 = constant [3 x float] [float -0.0, float 1.0, float 0.0]
@@ -71,7 +71,7 @@ $comdat.samesize = comdat samesize
@constant.vector.i32 = constant <3 x i32> <i32 -0, i32 1, i32 0>
; CHECK: @constant.vector.i64 = constant <3 x i64> <i64 0, i64 1, i64 0>
@constant.vector.i64 = constant <3 x i64> <i64 -0, i64 1, i64 0>
-; CHECK: @constant.vector.f16 = constant <3 x half> <half 0xH8000, half 0xH3C00, half 0xH0000>
+; CHECK: @constant.vector.f16 = constant <3 x half> <half f0x8000, half f0x3C00, half f0x0000>
@constant.vector.f16 = constant <3 x half> <half -0.0, half 1.0, half 0.0>
; CHECK: @constant.vector.f32 = constant <3 x float> <float -0.000000e+00, float 1.000000e+00, float 0.000000e+00>
@constant.vector.f32 = constant <3 x float> <float -0.0, float 1.0, float 0.0>
diff --git a/llvm/test/Bitcode/constant-splat.ll b/llvm/test/Bitcode/constant-splat.ll
index 2bcc3ddf3e4f3a..c84ee926deb8bb 100644
--- a/llvm/test/Bitcode/constant-splat.ll
+++ b/llvm/test/Bitcode/constant-splat.ll
@@ -17,8 +17,8 @@
; CHECK: @constant.splat.i128 = constant <7 x i128> splat (i128 85070591730234615870450834276742070272)
@constant.splat.i128 = constant <7 x i128> splat (i128 85070591730234615870450834276742070272)
-; CHECK: @constant.splat.f16 = constant <2 x half> splat (half 0xHBC00)
-@constant.splat.f16 = constant <2 x half> splat (half 0xHBC00)
+; CHECK: @constant.splat.f16 = constant <2 x half> splat (half f0xBC00)
+@constant.splat.f16 = constant <2 x half> splat (half f0xBC00)
; CHECK: @constant.splat.f32 = constant <4 x float> splat (float -2.000000e+00)
@constant.splat.f32 = constant <4 x float> splat (float -2.000000e+00)
@@ -26,17 +26,17 @@
; CHECK: @constant.splat.f64 = constant <6 x double> splat (double -3.000000e+00)
@constant.splat.f64 = constant <6 x double> splat (double -3.000000e+00)
-; CHECK: @constant.splat.128 = constant <8 x fp128> splat (fp128 0xL00000000000000018000000000000000)
-@constant.splat.128 = constant <8 x fp128> splat (fp128 0xL00000000000000018000000000000000)
+; CHECK: @constant.splat.128 = constant <8 x fp128> splat (fp128 f0x80000000000000000000000000000001)
+@constant.splat.128 = constant <8 x fp128> splat (fp128 f0x80000000000000000000000000000001)
-; CHECK: @constant.splat.bf16 = constant <1 x bfloat> splat (bfloat 0xRC0A0)
-@constant.splat.bf16 = constant <1 x bfloat> splat (bfloat 0xRC0A0)
+; CHECK: @constant.splat.bf16 = constant <1 x bfloat> splat (bfloat f0xC0A0)
+@constant.splat.bf16 = constant <1 x bfloat> splat (bfloat f0xC0A0)
-; CHECK: @constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 0xK4000C8F5C28F5C28F800)
-@constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 0xK4000C8F5C28F5C28F800)
+; CHECK: @constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 f0x4000C8F5C28F5C28F800)
+@constant.splat.x86_fp80 = constant <3 x x86_fp80> splat (x86_fp80 f0x4000C8F5C28F5C28F800)
-; CHECK: @constant.splat.ppc_fp128 = constant <7 x ppc_fp128> splat (ppc_fp128 0xM80000000000000000000000000000000)
-@constant.splat.ppc_fp128 = constant <7 x ppc_fp128> splat (ppc_fp128 0xM80000000000000000000000000000000)
+; CHECK: @constant.splat.ppc_fp128 = constant <7 x ppc_fp128> splat (ppc_fp128 f0x00000000000000008000000000000000)
+@constant.splat.ppc_fp128 = constant <7 x ppc_fp128> splat (ppc_fp128 f0x00000000000000008000000000000000)
define void @add_fixed_lenth_vector_splat_i32(<4 x i32> %a) {
; CHECK: %add = add <4 x i32> %a, splat (i32 137)
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
index 7a67cf3fd4c942..7722aba9469251 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
@@ -1820,11 +1820,11 @@ define <16 x i8> @test_shufflevector_v8s8_v16s8(<8 x i8> %arg1, <8 x i8> %arg2)
; CHECK-LABEL: test_constant_vector
; CHECK: [[UNDEF:%[0-9]+]]:_(s16) = G_IMPLICIT_DEF
-; CHECK: [[F:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+; CHECK: [[F:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; CHECK: [[M:%[0-9]+]]:_(<4 x s16>) = G_BUILD_VECTOR [[UNDEF]](s16), [[UNDEF]](s16), [[UNDEF]](s16), [[F]](s16)
; CHECK: $d0 = COPY [[M]](<4 x s16>)
define <4 x half> @test_constant_vector() {
- ret <4 x half> <half undef, half undef, half undef, half 0xH3C00>
+ ret <4 x half> <half undef, half undef, half undef, half f0x3C00>
}
define i32 @test_target_mem_intrinsic(ptr %addr) {
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir
index a543e7cd4c7e4f..c9896101bcb1e5 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir
@@ -35,9 +35,9 @@ name: test_combine_half_fabs_neg_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_half_fabs_neg_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4580
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4580
; CHECK: $h0 = COPY [[C]](s16)
- %0:_(s16) = G_FCONSTANT half 0xHC580
+ %0:_(s16) = G_FCONSTANT half f0xC580
%1:_(s16) = G_FABS %0
$h0 = COPY %1(s16)
...
@@ -46,9 +46,9 @@ name: test_combine_half_fabs_pos_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_half_fabs_pos_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4580
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4580
; CHECK: $h0 = COPY [[C]](s16)
- %0:_(s16) = G_FCONSTANT half 0xH4580
+ %0:_(s16) = G_FCONSTANT half f0x4580
%1:_(s16) = G_FABS %0
$h0 = COPY %1(s16)
...
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir
index 9e7e279e9e1a3e..5add6134976e18 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir
@@ -6,7 +6,7 @@ name: test_combine_half_flog2_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_half_flog2_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4000
; CHECK: $h0 = COPY [[C]](s16)
%0:_(s16) = G_FCONSTANT half 4.000000e+00
%1:_(s16) = G_FLOG2 %0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir
index 6e675c00d846ba..74ee3649bd5af8 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir
@@ -7,10 +7,10 @@ body: |
bb.1:
liveins:
; CHECK-LABEL: name: test_combine_nan_rhs_fminimum_half
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH7C01
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x7C01
; CHECK-NEXT: $h0 = COPY [[C]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMINIMUM %0, %1
$h0 = COPY %2
...
@@ -46,10 +46,10 @@ body: |
bb.1:
liveins:
; CHECK-LABEL: name: test_combine_nan_lhs_fminimum_half
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH7C01
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x7C01
; CHECK-NEXT: $h0 = COPY [[C]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMINIMUM %1, %0
$h0 = COPY %2
...
@@ -85,10 +85,10 @@ body: |
bb.1:
liveins:
; CHECK-LABEL: name: test_combine_nan_rhs_fmaximum_half
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH7C01
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x7C01
; CHECK-NEXT: $h0 = COPY [[C]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMAXIMUM %0, %1
$h0 = COPY %2
...
@@ -124,10 +124,10 @@ body: |
bb.1:
liveins:
; CHECK-LABEL: name: test_combine_nan_lhs_fmaximum_half
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH7C01
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x7C01
; CHECK-NEXT: $h0 = COPY [[C]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMAXIMUM %1, %0
$h0 = COPY %2
...
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminnum-fmaxnum.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminnum-fmaxnum.mir
index 9f93205a38a5bf..e55aaec098abe2 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminnum-fmaxnum.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fminnum-fmaxnum.mir
@@ -10,7 +10,7 @@ body: |
; CHECK: [[COPY:%[0-9]+]]:_(s16) = COPY $h0
; CHECK-NEXT: $h0 = COPY [[COPY]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMINNUM %0, %1
$h0 = COPY %2
...
@@ -49,7 +49,7 @@ body: |
; CHECK: [[COPY:%[0-9]+]]:_(s16) = COPY $h0
; CHECK-NEXT: $h0 = COPY [[COPY]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMINNUM %1, %0
$h0 = COPY %2
...
@@ -88,7 +88,7 @@ body: |
; CHECK: [[COPY:%[0-9]+]]:_(s16) = COPY $h0
; CHECK-NEXT: $h0 = COPY [[COPY]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMAXNUM %0, %1
$h0 = COPY %2
...
@@ -127,7 +127,7 @@ body: |
; CHECK: [[COPY:%[0-9]+]]:_(s16) = COPY $h0
; CHECK-NEXT: $h0 = COPY [[COPY]](s16)
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH7C01
+ %1:_(s16) = G_FCONSTANT half f0x7C01
%2:_(s16) = G_FMAXNUM %1, %0
$h0 = COPY %2
...
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir
index 1b1077854b4c16..4a97682eb78865 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir
@@ -31,9 +31,9 @@ name: test_combine_half_fneg_neg_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_half_fneg_neg_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4580
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4580
; CHECK: $h0 = COPY [[C]](s16)
- %0:_(s16) = G_FCONSTANT half 0xHC580
+ %0:_(s16) = G_FCONSTANT half f0xC580
%1:_(s16) = G_FNEG %0
$h0 = COPY %1(s16)
...
@@ -42,9 +42,9 @@ name: test_combine_half_fneg_pos_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_half_fneg_pos_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xHC580
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0xC580
; CHECK: $h0 = COPY [[C]](s16)
- %0:_(s16) = G_FCONSTANT half 0xH4580
+ %0:_(s16) = G_FCONSTANT half f0x4580
%1:_(s16) = G_FNEG %0
$h0 = COPY %1(s16)
...
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir
index 1fd7f6f39caca7..a0092ab0ddec48 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir
@@ -6,7 +6,7 @@ name: test_combine_float_to_half_fptrunc_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_float_to_half_fptrunc_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4580
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4580
; CHECK: $h0 = COPY [[C]](s16)
%0:_(s32) = G_FCONSTANT float 5.500000e+00
%1:_(s16) = G_FPTRUNC %0(s32)
@@ -17,7 +17,7 @@ name: test_combine_double_to_half_fptrunc_constant
body: |
bb.1:
; CHECK-LABEL: name: test_combine_double_to_half_fptrunc_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4433
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4433
; CHECK: $h0 = COPY [[C]](s16)
%0:_(s64) = G_FCONSTANT double 4.200000e+00
%1:_(s16) = G_FPTRUNC %0(s64)
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir
index e114d017931675..3c5c20c0317ff4 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir
@@ -7,7 +7,7 @@ body: |
bb.1:
liveins:
; CHECK-LABEL: name: test_combine_half_fsqrt_constant
- ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4000
; CHECK: $h0 = COPY [[C]](s16)
%0:_(s16) = G_FCONSTANT half 4.000000e+00
%1:_(s16) = G_FSQRT %0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/fp128-legalize-crash-pr35690.mir b/llvm/test/CodeGen/AArch64/GlobalISel/fp128-legalize-crash-pr35690.mir
index d24fb62ffab249..f50ef3d12128c7 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/fp128-legalize-crash-pr35690.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/fp128-legalize-crash-pr35690.mir
@@ -8,7 +8,7 @@
%a.addr = alloca fp128, align 16
store fp128 %a, ptr %a.addr, align 16
%0 = load fp128, ptr %a.addr, align 16
- %sub = fsub fp128 0xL00000000000000008000000000000000, %0
+ %sub = fsub fp128 f0x80000000000000000000000000000000, %0
ret fp128 %sub
}
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp128-fconstant.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp128-fconstant.mir
index a0979c5f5d1e02..652ec65dae2267 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp128-fconstant.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp128-fconstant.mir
@@ -15,7 +15,7 @@ body: |
; CHECK: [[LDRQui:%[0-9]+]]:fpr128 = LDRQui [[ADRP]], target-flags(aarch64-pageoff, aarch64-nc) %const.0
; CHECK: $q0 = COPY [[LDRQui]]
; CHECK: RET_ReallyLR implicit $q0
- %0:fpr(s128) = G_FCONSTANT fp128 0xL00000000000000004000000000000000
+ %0:fpr(s128) = G_FCONSTANT fp128 f0x40000000000000000000000000000000
$q0 = COPY %0:fpr(s128)
RET_ReallyLR implicit $q0
...
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir
index 44d6b95eb5491e..0860f68c97e5ad 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir
@@ -12,7 +12,7 @@ body: |
; NO-FP16-NEXT: $h0 = COPY %cst(s16)
; NO-FP16-NEXT: RET_ReallyLR implicit $h0
; FP16-LABEL: name: fp16
- ; FP16: %cst:_(s16) = G_FCONSTANT half 0xH0000
+ ; FP16: %cst:_(s16) = G_FCONSTANT half f0x0000
; FP16-NEXT: $h0 = COPY %cst(s16)
; FP16-NEXT: RET_ReallyLR implicit $h0
%cst:_(s16) = G_FCONSTANT half 0.0
@@ -29,7 +29,7 @@ body: |
; NO-FP16-NEXT: $h0 = COPY %cst(s16)
; NO-FP16-NEXT: RET_ReallyLR implicit $h0
; FP16-LABEL: name: fp16_non_zero
- ; FP16: %cst:_(s16) = G_FCONSTANT half 0xH4000
+ ; FP16: %cst:_(s16) = G_FCONSTANT half f0x4000
; FP16-NEXT: $h0 = COPY %cst(s16)
; FP16-NEXT: RET_ReallyLR implicit $h0
%cst:_(s16) = G_FCONSTANT half 2.0
@@ -47,11 +47,11 @@ body: |
; NO-FP16-NEXT: $w0 = COPY %ext(s32)
; NO-FP16-NEXT: RET_ReallyLR implicit $w0
; FP16-LABEL: name: nan
- ; FP16: %cst:_(s16) = G_FCONSTANT half 0xH7C01
+ ; FP16: %cst:_(s16) = G_FCONSTANT half f0x7C01
; FP16-NEXT: %ext:_(s32) = G_FPEXT %cst(s16)
; FP16-NEXT: $w0 = COPY %ext(s32)
; FP16-NEXT: RET_ReallyLR implicit $w0
- %cst:_(s16) = G_FCONSTANT half 0xH7C01
+ %cst:_(s16) = G_FCONSTANT half f0x7C01
%ext:_(s32) = G_FPEXT %cst(s16)
$w0 = COPY %ext(s32)
RET_ReallyLR implicit $w0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir b/llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir
index 03e507f5eaa7fb..4b1e5cfb8ad1ef 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir
@@ -10,12 +10,12 @@ body: |
; CHECK: liveins: $h0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s16) = COPY $h0
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; CHECK-NEXT: [[FMAXIMUM:%[0-9]+]]:_(s16) = G_FMAXIMUM [[COPY]], [[C]]
; CHECK-NEXT: $h0 = COPY [[FMAXIMUM]](s16)
; CHECK-NEXT: RET_ReallyLR implicit $h0
%0:_(s16) = COPY $h0
- %1:_(s16) = G_FCONSTANT half 0xH0000
+ %1:_(s16) = G_FCONSTANT half f0x0000
%2:_(s1) = G_FCMP floatpred(olt), %0(s16), %1
%3:_(s16) = G_SELECT %2(s1), %1, %0
$h0 = COPY %3(s16)
@@ -98,13 +98,13 @@ body: |
; CHECK: liveins: $q0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<8 x s16>) = COPY $q0
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16)
; CHECK-NEXT: [[FMAXIMUM:%[0-9]+]]:_(<8 x s16>) = G_FMAXIMUM [[COPY]], [[BUILD_VECTOR]]
; CHECK-NEXT: $q0 = COPY [[FMAXIMUM]](<8 x s16>)
; CHECK-NEXT: RET_ReallyLR implicit $q0
%0:_(<8 x s16>) = COPY $q0
- %2:_(s16) = G_FCONSTANT half 0xH0000
+ %2:_(s16) = G_FCONSTANT half f0x0000
%1:_(<8 x s16>) = G_BUILD_VECTOR %2(s16), %2(s16), %2(s16), %2(s16), %2(s16), %2(s16), %2(s16), %2(s16)
%3:_(<8 x s1>) = G_FCMP floatpred(olt), %0(<8 x s16>), %1
%4:_(<8 x s16>) = G_SELECT %3(<8 x s1>), %1, %0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/select-fp16-fconstant.mir b/llvm/test/CodeGen/AArch64/GlobalISel/select-fp16-fconstant.mir
index 18f907813de526..9e88d8b6cad0ce 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/select-fp16-fconstant.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/select-fp16-fconstant.mir
@@ -43,6 +43,6 @@ body: |
; CHECK: [[LDRHui:%[0-9]+]]:fpr16 = LDRHui [[ADRP]], target-flags(aarch64-pageoff, aarch64-nc) %const.0 :: (load (s16) from constant-pool)
; CHECK: $h0 = COPY [[LDRHui]]
; CHECK: RET_ReallyLR implicit $h0
- %0:fpr(s16) = G_FCONSTANT half 0xH000B
+ %0:fpr(s16) = G_FCONSTANT half f0x000B
$h0 = COPY %0(s16)
RET_ReallyLR implicit $h0
diff --git a/llvm/test/CodeGen/AArch64/arm64-aapcs.ll b/llvm/test/CodeGen/AArch64/arm64-aapcs.ll
index 03393ad6aef5c8..b5f64f74613671 100644
--- a/llvm/test/CodeGen/AArch64/arm64-aapcs.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-aapcs.ll
@@ -131,7 +131,7 @@ define half @test_half(float, half %arg) {
define half @test_half_const() {
; CHECK-LABEL: test_half_const:
; CHECK: ldr h0, [x{{[0-9]+}}, :lo12:{{.*}}]
- ret half 0xH4248
+ ret half f0x4248
}
; Check that v4f16 can be passed and returned in registers
diff --git a/llvm/test/CodeGen/AArch64/arm64-build-vector.ll b/llvm/test/CodeGen/AArch64/arm64-build-vector.ll
index 82802c79c70858..788a571ff5424e 100644
--- a/llvm/test/CodeGen/AArch64/arm64-build-vector.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-build-vector.ll
@@ -60,7 +60,7 @@ define void @widen_f16_build_vector(ptr %addr) {
; CHECK-NEXT: movk w8, #13294, lsl #16
; CHECK-NEXT: str w8, [x0]
; CHECK-NEXT: ret
- store <2 x half> <half 0xH33EE, half 0xH33EE>, ptr %addr, align 2
+ store <2 x half> <half f0x33EE, half f0x33EE>, ptr %addr, align 2
ret void
}
diff --git a/llvm/test/CodeGen/AArch64/arm64-fp-imm-size.ll b/llvm/test/CodeGen/AArch64/arm64-fp-imm-size.ll
index cfb7c60f5a8b00..5466862436c852 100644
--- a/llvm/test/CodeGen/AArch64/arm64-fp-imm-size.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-fp-imm-size.ll
@@ -37,7 +37,7 @@ define fp128 @baz() optsize {
; CHECK: adrp x[[REG:[0-9]+]], lCPI3_0@PAGE
; CHECK: ldr q0, [x[[REG]], lCPI3_0@PAGEOFF]
; CHECK-NEXT: ret
- ret fp128 0xL00000000000000000000000000000000
+ ret fp128 f0x00000000000000000000000000000000
}
; CHECK: literal8
diff --git a/llvm/test/CodeGen/AArch64/arm64-fp-imm.ll b/llvm/test/CodeGen/AArch64/arm64-fp-imm.ll
index 61eb67486ae3df..b835e7eda6aacb 100644
--- a/llvm/test/CodeGen/AArch64/arm64-fp-imm.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-fp-imm.ll
@@ -28,5 +28,5 @@ define fp128 @baz() {
; CHECK: adrp x[[REG:[0-9]+]], lCPI2_0@PAGE
; CHECK: ldr q0, [x[[REG]], lCPI2_0@PAGEOFF]
; CHECK-NEXT: ret
- ret fp128 0xL00000000000000000000000000000000
+ ret fp128 f0x00000000000000000000000000000000
}
diff --git a/llvm/test/CodeGen/AArch64/arm64-fp128.ll b/llvm/test/CodeGen/AArch64/arm64-fp128.ll
index 7eb26096ed1566..cc683c6529c329 100644
--- a/llvm/test/CodeGen/AArch64/arm64-fp128.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-fp128.ll
@@ -398,7 +398,7 @@ define fp128 @test_neg_sub(fp128 %in) {
; CHECK-NEXT: strb w8, [sp, #15]
; CHECK-NEXT: ldr q0, [sp], #16
; CHECK-NEXT: ret
- %ret = fsub fp128 0xL00000000000000008000000000000000, %in
+ %ret = fsub fp128 f0x80000000000000000000000000000000, %in
ret fp128 %ret
}
diff --git a/llvm/test/CodeGen/AArch64/bf16-imm.ll b/llvm/test/CodeGen/AArch64/bf16-imm.ll
index 450bf286d8d783..15a2b9f190f449 100644
--- a/llvm/test/CodeGen/AArch64/bf16-imm.ll
+++ b/llvm/test/CodeGen/AArch64/bf16-imm.ll
@@ -8,7 +8,7 @@ define bfloat @Const0() {
; CHECK-NEXT: movi d0, #0000000000000000
; CHECK-NEXT: ret
entry:
- ret bfloat 0xR0000
+ ret bfloat f0x0000
}
define bfloat @Const1() {
@@ -23,7 +23,7 @@ define bfloat @Const1() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI1_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR3C00
+ ret bfloat f0x3C00
}
define bfloat @Const2() {
@@ -38,7 +38,7 @@ define bfloat @Const2() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI2_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR3000
+ ret bfloat f0x3000
}
define bfloat @Const3() {
@@ -53,7 +53,7 @@ define bfloat @Const3() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI3_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR4F80
+ ret bfloat f0x4F80
}
define bfloat @Const4() {
@@ -68,7 +68,7 @@ define bfloat @Const4() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI4_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR4FC0
+ ret bfloat f0x4FC0
}
define bfloat @Const5() {
@@ -84,7 +84,7 @@ define bfloat @Const5() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI5_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR2FF0
+ ret bfloat f0x2FF0
}
define bfloat @Const6() {
@@ -100,7 +100,7 @@ define bfloat @Const6() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI6_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR4FC1
+ ret bfloat f0x4FC1
}
define bfloat @Const7() {
@@ -116,6 +116,6 @@ define bfloat @Const7() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI7_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret bfloat 0xR5000
+ ret bfloat f0x5000
}
diff --git a/llvm/test/CodeGen/AArch64/bf16-instructions.ll b/llvm/test/CodeGen/AArch64/bf16-instructions.ll
index 33997614598c3a..c1d092011bbc80 100644
--- a/llvm/test/CodeGen/AArch64/bf16-instructions.ll
+++ b/llvm/test/CodeGen/AArch64/bf16-instructions.ll
@@ -656,10 +656,10 @@ define void @test_fccmp(bfloat %in, ptr %out) {
; CHECK-NEXT: fcsel s0, s0, s1, gt
; CHECK-NEXT: str h0, [x0]
; CHECK-NEXT: ret
- %cmp1 = fcmp ogt bfloat %in, 0xR4800
- %cmp2 = fcmp olt bfloat %in, 0xR4500
+ %cmp1 = fcmp ogt bfloat %in, f0x4800
+ %cmp2 = fcmp olt bfloat %in, f0x4500
%cond = and i1 %cmp1, %cmp2
- %result = select i1 %cond, bfloat %in, bfloat 0xR4500
+ %result = select i1 %cond, bfloat %in, bfloat f0x4500
store bfloat %result, ptr %out
ret void
}
diff --git a/llvm/test/CodeGen/AArch64/bf16-v4-instructions.ll b/llvm/test/CodeGen/AArch64/bf16-v4-instructions.ll
index 9b6e19eba3f4e6..8e6bd346b61d1c 100644
--- a/llvm/test/CodeGen/AArch64/bf16-v4-instructions.ll
+++ b/llvm/test/CodeGen/AArch64/bf16-v4-instructions.ll
@@ -38,7 +38,7 @@ define <4 x bfloat> @build_h4(<4 x bfloat> %a) {
; CHECK-NEXT: dup v0.4h, w8
; CHECK-NEXT: ret
entry:
- ret <4 x bfloat> <bfloat 0xR3CCD, bfloat 0xR3CCD, bfloat 0xR3CCD, bfloat 0xR3CCD>
+ ret <4 x bfloat> <bfloat f0x3CCD, bfloat f0x3CCD, bfloat f0x3CCD, bfloat f0x3CCD>
}
diff --git a/llvm/test/CodeGen/AArch64/bf16.ll b/llvm/test/CodeGen/AArch64/bf16.ll
index d3911ae4c0339e..3e50a75bef2944 100644
--- a/llvm/test/CodeGen/AArch64/bf16.ll
+++ b/llvm/test/CodeGen/AArch64/bf16.ll
@@ -69,7 +69,7 @@ define <8 x bfloat> @test_build_vector_const() {
; CHECK-LABEL: test_build_vector_const:
; CHECK: mov [[TMP:w[0-9]+]], #16256
; CHECK: dup v0.8h, [[TMP]]
- ret <8 x bfloat> <bfloat 0xR3F80, bfloat 0xR3F80, bfloat 0xR3F80, bfloat 0xR3F80, bfloat 0xR3F80, bfloat 0xR3F80, bfloat 0xR3F80, bfloat 0xR3F80>
+ ret <8 x bfloat> <bfloat f0x3F80, bfloat f0x3F80, bfloat f0x3F80, bfloat f0x3F80, bfloat f0x3F80, bfloat f0x3F80, bfloat f0x3F80, bfloat f0x3F80>
}
define { bfloat, ptr } @test_store_post(bfloat %val, ptr %ptr) {
diff --git a/llvm/test/CodeGen/AArch64/f16-imm.ll b/llvm/test/CodeGen/AArch64/f16-imm.ll
index 58793bf19f3a61..63b5e2bbdb256c 100644
--- a/llvm/test/CodeGen/AArch64/f16-imm.ll
+++ b/llvm/test/CodeGen/AArch64/f16-imm.ll
@@ -19,7 +19,7 @@ define half @Const0() {
; CHECK-NOFP16-NEXT: movi d0, #0000000000000000
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH0000
+ ret half f0x0000
}
define half @Const1() {
@@ -34,7 +34,7 @@ define half @Const1() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI1_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH3C00
+ ret half f0x3C00
}
define half @Const2() {
@@ -49,7 +49,7 @@ define half @Const2() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI2_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH3000
+ ret half f0x3000
}
define half @Const3() {
@@ -64,7 +64,7 @@ define half @Const3() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI3_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH4F80
+ ret half f0x4F80
}
define half @Const4() {
@@ -79,7 +79,7 @@ define half @Const4() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI4_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH4FC0
+ ret half f0x4FC0
}
define half @Const5() {
@@ -95,7 +95,7 @@ define half @Const5() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI5_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH2FF0
+ ret half f0x2FF0
}
define half @Const6() {
@@ -111,7 +111,7 @@ define half @Const6() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI6_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH4FC1
+ ret half f0x4FC1
}
define half @Const7() {
@@ -127,6 +127,6 @@ define half @Const7() {
; CHECK-NOFP16-NEXT: ldr h0, [x8, :lo12:.LCPI7_0]
; CHECK-NOFP16-NEXT: ret
entry:
- ret half 0xH5000
+ ret half f0x5000
}
diff --git a/llvm/test/CodeGen/AArch64/f16-instructions.ll b/llvm/test/CodeGen/AArch64/f16-instructions.ll
index 5460a376931a55..34126d954e470b 100644
--- a/llvm/test/CodeGen/AArch64/f16-instructions.ll
+++ b/llvm/test/CodeGen/AArch64/f16-instructions.ll
@@ -812,10 +812,10 @@ define void @test_fccmp(half %in, ptr %out) {
; CHECK-FP16-GI-NEXT: csel w8, w8, w9, gt
; CHECK-FP16-GI-NEXT: strh w8, [x0]
; CHECK-FP16-GI-NEXT: ret
- %cmp1 = fcmp ogt half %in, 0xH4800
- %cmp2 = fcmp olt half %in, 0xH4500
+ %cmp1 = fcmp ogt half %in, f0x4800
+ %cmp2 = fcmp olt half %in, f0x4500
%cond = and i1 %cmp1, %cmp2
- %result = select i1 %cond, half %in, half 0xH4500
+ %result = select i1 %cond, half %in, half f0x4500
store half %result, ptr %out
ret void
}
diff --git a/llvm/test/CodeGen/AArch64/fcopysign-noneon.ll b/llvm/test/CodeGen/AArch64/fcopysign-noneon.ll
index b9713b57cef681..6c2870a04c4483 100644
--- a/llvm/test/CodeGen/AArch64/fcopysign-noneon.ll
+++ b/llvm/test/CodeGen/AArch64/fcopysign-noneon.ll
@@ -48,7 +48,7 @@ define fp128 @copysign0() {
entry:
%v = load double, ptr @val_double, align 8
%conv = fpext double %v to fp128
- %call = tail call fp128 @llvm.copysign.f128(fp128 0xL00000000000000007FFF000000000000, fp128 %conv) #2
+ %call = tail call fp128 @llvm.copysign.f128(fp128 f0x7FFF0000000000000000000000000000, fp128 %conv) #2
ret fp128 %call
}
diff --git a/llvm/test/CodeGen/AArch64/fp16-v4-instructions.ll b/llvm/test/CodeGen/AArch64/fp16-v4-instructions.ll
index ae94a9d004f159..5cbe9f7f52e87c 100644
--- a/llvm/test/CodeGen/AArch64/fp16-v4-instructions.ll
+++ b/llvm/test/CodeGen/AArch64/fp16-v4-instructions.ll
@@ -22,7 +22,7 @@ entry:
; CHECK-COMMON-LABEL: build_h4:
; CHECK-COMMON: mov [[GPR:w[0-9]+]], #15565
; CHECK-COMMON-NEXT: dup v0.4h, [[GPR]]
- ret <4 x half> <half 0xH3CCD, half 0xH3CCD, half 0xH3CCD, half 0xH3CCD>
+ ret <4 x half> <half f0x3CCD, half f0x3CCD, half f0x3CCD, half f0x3CCD>
}
diff --git a/llvm/test/CodeGen/AArch64/fp16-vector-nvcast.ll b/llvm/test/CodeGen/AArch64/fp16-vector-nvcast.ll
index ef947dc6c05bd2..5b464306375cd3 100644
--- a/llvm/test/CodeGen/AArch64/fp16-vector-nvcast.ll
+++ b/llvm/test/CodeGen/AArch64/fp16-vector-nvcast.ll
@@ -8,7 +8,7 @@ define void @nvcast_v2i32(ptr %a) #0 {
; CHECK-NEXT: movi v0.2s, #171, lsl #16
; CHECK-NEXT: str d0, [x0]
; CHECK-NEXT: ret
- store volatile <4 x half> <half 0xH0000, half 0xH00AB, half 0xH0000, half 0xH00AB>, ptr %a
+ store volatile <4 x half> <half f0x0000, half f0x00AB, half f0x0000, half f0x00AB>, ptr %a
ret void
}
@@ -19,7 +19,7 @@ define void @nvcast_v4i16(ptr %a) #0 {
; CHECK-NEXT: movi v0.4h, #171
; CHECK-NEXT: str d0, [x0]
; CHECK-NEXT: ret
- store volatile <4 x half> <half 0xH00AB, half 0xH00AB, half 0xH00AB, half 0xH00AB>, ptr %a
+ store volatile <4 x half> <half f0x00AB, half f0x00AB, half f0x00AB, half f0x00AB>, ptr %a
ret void
}
@@ -30,7 +30,7 @@ define void @nvcast_v8i8(ptr %a) #0 {
; CHECK-NEXT: movi v0.8b, #171
; CHECK-NEXT: str d0, [x0]
; CHECK-NEXT: ret
- store volatile <4 x half> <half 0xHABAB, half 0xHABAB, half 0xHABAB, half 0xHABAB>, ptr %a
+ store volatile <4 x half> <half f0xABAB, half f0xABAB, half f0xABAB, half f0xABAB>, ptr %a
ret void
}
@@ -52,7 +52,7 @@ define void @nvcast_v4i32(ptr %a) #0 {
; CHECK-NEXT: movi v0.4s, #171, lsl #16
; CHECK-NEXT: str q0, [x0]
; CHECK-NEXT: ret
- store volatile <8 x half> <half 0xH0000, half 0xH00AB, half 0xH0000, half 0xH00AB, half 0xH0000, half 0xH00AB, half 0xH0000, half 0xH00AB>, ptr %a
+ store volatile <8 x half> <half f0x0000, half f0x00AB, half f0x0000, half f0x00AB, half f0x0000, half f0x00AB, half f0x0000, half f0x00AB>, ptr %a
ret void
}
@@ -63,7 +63,7 @@ define void @nvcast_v8i16(ptr %a) #0 {
; CHECK-NEXT: movi v0.8h, #171
; CHECK-NEXT: str q0, [x0]
; CHECK-NEXT: ret
- store volatile <8 x half> <half 0xH00AB, half 0xH00AB, half 0xH00AB, half 0xH00AB, half 0xH00AB, half 0xH00AB, half 0xH00AB, half 0xH00AB>, ptr %a
+ store volatile <8 x half> <half f0x00AB, half f0x00AB, half f0x00AB, half f0x00AB, half f0x00AB, half f0x00AB, half f0x00AB, half f0x00AB>, ptr %a
ret void
}
@@ -74,7 +74,7 @@ define void @nvcast_v16i8(ptr %a) #0 {
; CHECK-NEXT: movi v0.16b, #171
; CHECK-NEXT: str q0, [x0]
; CHECK-NEXT: ret
- store volatile <8 x half> <half 0xHABAB, half 0xHABAB, half 0xHABAB, half 0xHABAB, half 0xHABAB, half 0xHABAB, half 0xHABAB, half 0xHABAB>, ptr %a
+ store volatile <8 x half> <half f0xABAB, half f0xABAB, half f0xABAB, half f0xABAB, half f0xABAB, half f0xABAB, half f0xABAB, half f0xABAB>, ptr %a
ret void
}
diff --git a/llvm/test/CodeGen/AArch64/fp16_intrinsic_lane.ll b/llvm/test/CodeGen/AArch64/fp16_intrinsic_lane.ll
index 368683e2b93af4..e42f0b2825c6a6 100644
--- a/llvm/test/CodeGen/AArch64/fp16_intrinsic_lane.ll
+++ b/llvm/test/CodeGen/AArch64/fp16_intrinsic_lane.ll
@@ -183,7 +183,7 @@ define <4 x half> @t_vfms_lane_f16(<4 x half> %a, <4 x half> %b, <4 x half> %c,
; CHECK-NEXT: fmls v0.4h, v1.4h, v2.h[0]
; CHECK-NEXT: ret
entry:
- %sub = fsub <4 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub = fsub <4 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%lane1 = shufflevector <4 x half> %c, <4 x half> undef, <4 x i32> zeroinitializer
%fmla3 = tail call <4 x half> @llvm.fma.v4f16(<4 x half> %sub, <4 x half> %lane1, <4 x half> %a)
ret <4 x half> %fmla3
@@ -196,7 +196,7 @@ define <8 x half> @t_vfmsq_lane_f16(<8 x half> %a, <8 x half> %b, <4 x half> %c,
; CHECK-NEXT: fmls v0.8h, v1.8h, v2.h[0]
; CHECK-NEXT: ret
entry:
- %sub = fsub <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub = fsub <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%lane1 = shufflevector <4 x half> %c, <4 x half> undef, <8 x i32> zeroinitializer
%fmla3 = tail call <8 x half> @llvm.fma.v8f16(<8 x half> %sub, <8 x half> %lane1, <8 x half> %a)
ret <8 x half> %fmla3
@@ -208,7 +208,7 @@ define <4 x half> @t_vfms_laneq_f16(<4 x half> %a, <4 x half> %b, <8 x half> %c,
; CHECK-NEXT: fmls v0.4h, v1.4h, v2.h[0]
; CHECK-NEXT: ret
entry:
- %sub = fsub <4 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub = fsub <4 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%lane1 = shufflevector <8 x half> %c, <8 x half> undef, <4 x i32> zeroinitializer
%0 = tail call <4 x half> @llvm.fma.v4f16(<4 x half> %lane1, <4 x half> %sub, <4 x half> %a)
ret <4 x half> %0
@@ -220,7 +220,7 @@ define <8 x half> @t_vfmsq_laneq_f16(<8 x half> %a, <8 x half> %b, <8 x half> %c
; CHECK-NEXT: fmls v0.8h, v1.8h, v2.h[0]
; CHECK-NEXT: ret
entry:
- %sub = fsub <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub = fsub <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%lane1 = shufflevector <8 x half> %c, <8 x half> undef, <8 x i32> zeroinitializer
%0 = tail call <8 x half> @llvm.fma.v8f16(<8 x half> %lane1, <8 x half> %sub, <8 x half> %a)
ret <8 x half> %0
@@ -233,7 +233,7 @@ define <4 x half> @t_vfms_n_f16(<4 x half> %a, <4 x half> %b, half %c) {
; CHECK-NEXT: fmls v0.4h, v1.4h, v2.h[0]
; CHECK-NEXT: ret
entry:
- %sub = fsub <4 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub = fsub <4 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%vecinit = insertelement <4 x half> undef, half %c, i32 0
%vecinit3 = shufflevector <4 x half> %vecinit, <4 x half> undef, <4 x i32> zeroinitializer
%0 = tail call <4 x half> @llvm.fma.v4f16(<4 x half> %sub, <4 x half> %vecinit3, <4 x half> %a) #4
@@ -247,7 +247,7 @@ define <8 x half> @t_vfmsq_n_f16(<8 x half> %a, <8 x half> %b, half %c) {
; CHECK-NEXT: fmls v0.8h, v1.8h, v2.h[0]
; CHECK-NEXT: ret
entry:
- %sub = fsub <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub = fsub <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%vecinit = insertelement <8 x half> undef, half %c, i32 0
%vecinit7 = shufflevector <8 x half> %vecinit, <8 x half> undef, <8 x i32> zeroinitializer
%0 = tail call <8 x half> @llvm.fma.v8f16(<8 x half> %sub, <8 x half> %vecinit7, <8 x half> %a) #4
@@ -261,7 +261,7 @@ define half @t_vfmsh_lane_f16_0(half %a, half %b, <4 x half> %c, i32 %lane) {
; CHECK-NEXT: fmsub h0, h2, h1, h0
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <4 x half> %c, i32 0
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
@@ -274,7 +274,7 @@ define half @t_vfmsh_lane_f16_0_swap(half %a, half %b, <4 x half> %c, i32 %lane)
; CHECK-NEXT: fmsub h0, h2, h1, h0
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <4 x half> %c, i32 0
%1 = tail call half @llvm.fma.f16(half %extract, half %0, half %a)
ret half %1
@@ -287,7 +287,7 @@ define half @t_vfmsh_lane_f16_3(half %a, half %b, <4 x half> %c, i32 %lane) {
; CHECK-NEXT: fmls h0, h1, v2.h[3]
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <4 x half> %c, i32 3
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
@@ -299,7 +299,7 @@ define half @t_vfmsh_laneq_f16_0(half %a, half %b, <8 x half> %c, i32 %lane) {
; CHECK-NEXT: fmsub h0, h2, h1, h0
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <8 x half> %c, i32 0
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
@@ -313,7 +313,7 @@ define half @t_vfmsh_lane_f16_0_3(half %a, <4 x half> %c, i32 %lane) {
; CHECK-NEXT: ret
entry:
%b = extractelement <4 x half> %c, i32 0
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <4 x half> %c, i32 3
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
@@ -325,7 +325,7 @@ define half @t_vfmsh_laneq_f16_0_swap(half %a, half %b, <8 x half> %c, i32 %lane
; CHECK-NEXT: fmsub h0, h2, h1, h0
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <8 x half> %c, i32 0
%1 = tail call half @llvm.fma.f16(half %extract, half %0, half %a)
ret half %1
@@ -337,7 +337,7 @@ define half @t_vfmsh_laneq_f16_7(half %a, half %b, <8 x half> %c, i32 %lane) {
; CHECK-NEXT: fmls h0, h1, v2.h[7]
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <8 x half> %c, i32 7
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
@@ -571,7 +571,7 @@ define half @t_vfmsh_lane3_f16(half %a, half %b, <4 x half> %c) {
; CHECK-NEXT: fmls h0, h1, v2.h[3]
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <4 x half> %c, i32 3
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
@@ -583,7 +583,7 @@ define half @t_vfmsh_laneq7_f16(half %a, half %b, <8 x half> %c) {
; CHECK-NEXT: fmls h0, h1, v2.h[7]
; CHECK-NEXT: ret
entry:
- %0 = fsub half 0xH8000, %b
+ %0 = fsub half f0x8000, %b
%extract = extractelement <8 x half> %c, i32 7
%1 = tail call half @llvm.fma.f16(half %0, half %extract, half %a)
ret half %1
diff --git a/llvm/test/CodeGen/AArch64/fp16_intrinsic_scalar_1op.ll b/llvm/test/CodeGen/AArch64/fp16_intrinsic_scalar_1op.ll
index 40d2d636b94bb3..a115cd7155adfc 100644
--- a/llvm/test/CodeGen/AArch64/fp16_intrinsic_scalar_1op.ll
+++ b/llvm/test/CodeGen/AArch64/fp16_intrinsic_scalar_1op.ll
@@ -32,7 +32,7 @@ define dso_local i16 @t2(half %a) {
; CHECK-NEXT: csetm w0, eq
; CHECK-NEXT: ret
entry:
- %0 = fcmp oeq half %a, 0xH0000
+ %0 = fcmp oeq half %a, f0x0000
%vceqz = sext i1 %0 to i16
ret i16 %vceqz
}
@@ -44,7 +44,7 @@ define dso_local i16 @t3(half %a) {
; CHECK-NEXT: csetm w0, ge
; CHECK-NEXT: ret
entry:
- %0 = fcmp oge half %a, 0xH0000
+ %0 = fcmp oge half %a, f0x0000
%vcgez = sext i1 %0 to i16
ret i16 %vcgez
}
@@ -56,7 +56,7 @@ define dso_local i16 @t4(half %a) {
; CHECK-NEXT: csetm w0, gt
; CHECK-NEXT: ret
entry:
- %0 = fcmp ogt half %a, 0xH0000
+ %0 = fcmp ogt half %a, f0x0000
%vcgtz = sext i1 %0 to i16
ret i16 %vcgtz
}
@@ -68,7 +68,7 @@ define dso_local i16 @t5(half %a) {
; CHECK-NEXT: csetm w0, ls
; CHECK-NEXT: ret
entry:
- %0 = fcmp ole half %a, 0xH0000
+ %0 = fcmp ole half %a, f0x0000
%vclez = sext i1 %0 to i16
ret i16 %vclez
}
@@ -80,7 +80,7 @@ define dso_local i16 @t6(half %a) {
; CHECK-NEXT: csetm w0, mi
; CHECK-NEXT: ret
entry:
- %0 = fcmp olt half %a, 0xH0000
+ %0 = fcmp olt half %a, f0x0000
%vcltz = sext i1 %0 to i16
ret i16 %vcltz
}
diff --git a/llvm/test/CodeGen/AArch64/half.ll b/llvm/test/CodeGen/AArch64/half.ll
index bb802033e05fc6..b58ef72d461f60 100644
--- a/llvm/test/CodeGen/AArch64/half.ll
+++ b/llvm/test/CodeGen/AArch64/half.ll
@@ -115,8 +115,8 @@ define i16 @test_fccmp(i1 %a, i16 %in) {
; CHECK-NEXT: cinc w0, w8, pl
; CHECK-NEXT: ret
%f16 = bitcast i16 %in to half
- %cmp0 = fcmp ogt half 0xH3333, %f16
- %cmp1 = fcmp ogt half 0xH2222, %f16
+ %cmp0 = fcmp ogt half f0x3333, %f16
+ %cmp1 = fcmp ogt half f0x2222, %f16
%x = select i1 %cmp0, i16 0, i16 1
%or = or i1 %cmp1, %cmp0
%y = select i1 %or, i16 4, i16 1
diff --git a/llvm/test/CodeGen/AArch64/isinf.ll b/llvm/test/CodeGen/AArch64/isinf.ll
index e68539bcf07d9c..2668995e2f886c 100644
--- a/llvm/test/CodeGen/AArch64/isinf.ll
+++ b/llvm/test/CodeGen/AArch64/isinf.ll
@@ -17,7 +17,7 @@ define i32 @replace_isinf_call_f16(half %x) {
; CHECK-NEXT: cset w0, eq
; CHECK-NEXT: ret
%abs = tail call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %abs, 0xH7C00
+ %cmpinf = fcmp oeq half %abs, f0x7C00
%ret = zext i1 %cmpinf to i32
ret i32 %ret
}
@@ -68,7 +68,7 @@ define i32 @replace_isinf_call_f128(fp128 %x) {
; CHECK-NEXT: cset w0, eq
; CHECK-NEXT: ret
%abs = tail call fp128 @llvm.fabs.f128(fp128 %x)
- %cmpinf = fcmp oeq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %cmpinf = fcmp oeq fp128 %abs, f0x7FFF0000000000000000000000000000
%ret = zext i1 %cmpinf to i32
ret i32 %ret
}
diff --git a/llvm/test/CodeGen/AArch64/mattr-all.ll b/llvm/test/CodeGen/AArch64/mattr-all.ll
index 1da37616c0cb13..dd8bca9fe39d7d 100644
--- a/llvm/test/CodeGen/AArch64/mattr-all.ll
+++ b/llvm/test/CodeGen/AArch64/mattr-all.ll
@@ -8,5 +8,5 @@ define half @bf16() nounwind {
; CHECK: // %bb.0:
; CHECK-NEXT: movi d0, #0000000000000000
; CHECK-NEXT: ret
- ret half 0xH0000
+ ret half f0x0000
}
diff --git a/llvm/test/CodeGen/AArch64/sve-pred-selectop3.ll b/llvm/test/CodeGen/AArch64/sve-pred-selectop3.ll
index 6607f9c3b368e9..1d8fa5376818b3 100644
--- a/llvm/test/CodeGen/AArch64/sve-pred-selectop3.ll
+++ b/llvm/test/CodeGen/AArch64/sve-pred-selectop3.ll
@@ -662,7 +662,7 @@ define <vscale x 8 x half> @fadd_nxv8f16_x(<vscale x 8 x half> %x, <vscale x 8 x
; CHECK-NEXT: ret
entry:
%c = fcmp ugt <vscale x 8 x half> %n, zeroinitializer
- %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %y, <vscale x 8 x half> splat (half 0xH8000)
+ %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %y, <vscale x 8 x half> splat (half f0x8000)
%b = fadd <vscale x 8 x half> %a, %x
ret <vscale x 8 x half> %b
}
@@ -752,7 +752,7 @@ define <vscale x 8 x half> @fmul_nxv8f16_x(<vscale x 8 x half> %x, <vscale x 8 x
; CHECK-NEXT: ret
entry:
%c = fcmp ugt <vscale x 8 x half> %n, zeroinitializer
- %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %y, <vscale x 8 x half> splat (half 0xH3C00)
+ %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %y, <vscale x 8 x half> splat (half f0x3C00)
%b = fmul <vscale x 8 x half> %a, %x
ret <vscale x 8 x half> %b
}
@@ -799,7 +799,7 @@ define <vscale x 8 x half> @fdiv_nxv8f16_x(<vscale x 8 x half> %x, <vscale x 8 x
; CHECK-NEXT: ret
entry:
%c = fcmp ugt <vscale x 8 x half> %n, zeroinitializer
- %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %y, <vscale x 8 x half> splat (half 0xH3C00)
+ %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %y, <vscale x 8 x half> splat (half f0x3C00)
%b = fdiv <vscale x 8 x half> %x, %a
ret <vscale x 8 x half> %b
}
@@ -847,7 +847,7 @@ define <vscale x 8 x half> @fma_nxv8f16_x(<vscale x 8 x half> %x, <vscale x 8 x
entry:
%c = fcmp ugt <vscale x 8 x half> %n, zeroinitializer
%m = fmul fast <vscale x 8 x half> %y, %z
- %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %m, <vscale x 8 x half> splat (half 0xH8000)
+ %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %m, <vscale x 8 x half> splat (half f0x8000)
%b = fadd fast <vscale x 8 x half> %a, %x
ret <vscale x 8 x half> %b
}
@@ -1563,7 +1563,7 @@ define <vscale x 8 x half> @fadd_nxv8f16_y(<vscale x 8 x half> %x, <vscale x 8 x
; CHECK-NEXT: ret
entry:
%c = fcmp ugt <vscale x 8 x half> %n, zeroinitializer
- %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %x, <vscale x 8 x half> splat (half 0xH8000)
+ %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %x, <vscale x 8 x half> splat (half f0x8000)
%b = fadd <vscale x 8 x half> %a, %y
ret <vscale x 8 x half> %b
}
@@ -1659,7 +1659,7 @@ define <vscale x 8 x half> @fmul_nxv8f16_y(<vscale x 8 x half> %x, <vscale x 8 x
; CHECK-NEXT: ret
entry:
%c = fcmp ugt <vscale x 8 x half> %n, zeroinitializer
- %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %x, <vscale x 8 x half> splat (half 0xH3C00)
+ %a = select <vscale x 8 x i1> %c, <vscale x 8 x half> %x, <vscale x 8 x half> splat (half f0x3C00)
%b = fmul <vscale x 8 x half> %a, %y
ret <vscale x 8 x half> %b
}
diff --git a/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll b/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll
index 215b6e086591dc..634b63f257dd9c 100644
--- a/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll
+++ b/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll
@@ -85,7 +85,7 @@ define fp128 @test_v1f128_neutral(<1 x fp128> %a) nounwind {
; CHECK-LABEL: test_v1f128_neutral:
; CHECK: // %bb.0:
; CHECK-NEXT: ret
- %b = call fp128 @llvm.vector.reduce.fadd.f128.v1f128(fp128 0xL00000000000000008000000000000000, <1 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fadd.f128.v1f128(fp128 f0x80000000000000000000000000000000, <1 x fp128> %a)
ret fp128 %b
}
@@ -159,7 +159,7 @@ define fp128 @test_v2f128_neutral(<2 x fp128> %a) nounwind {
; CHECK-LABEL: test_v2f128_neutral:
; CHECK: // %bb.0:
; CHECK-NEXT: b __addtf3
- %b = call fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 0xL00000000000000008000000000000000, <2 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 f0x80000000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
diff --git a/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization.ll b/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization.ll
index a2e5a8a1b4c46f..bdbc36286072f8 100644
--- a/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization.ll
+++ b/llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization.ll
@@ -41,7 +41,7 @@ define fp128 @test_v1f128(<1 x fp128> %a) nounwind {
; CHECK-LABEL: test_v1f128:
; CHECK: // %bb.0:
; CHECK-NEXT: ret
- %b = call reassoc fp128 @llvm.vector.reduce.fadd.f128.v1f128(fp128 0xL00000000000000008000000000000000, <1 x fp128> %a)
+ %b = call reassoc fp128 @llvm.vector.reduce.fadd.f128.v1f128(fp128 f0x80000000000000000000000000000000, <1 x fp128> %a)
ret fp128 %b
}
@@ -82,7 +82,7 @@ define fp128 @test_v2f128(<2 x fp128> %a) nounwind {
; CHECK-LABEL: test_v2f128:
; CHECK: // %bb.0:
; CHECK-NEXT: b __addtf3
- %b = call reassoc fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 0xL00000000000000008000000000000000, <2 x fp128> %a)
+ %b = call reassoc fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 f0x80000000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir
index 00050157e97990..f2bf7f43091a37 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir
@@ -11,13 +11,13 @@ body: |
; GCN: liveins: $vgpr0
; GCN-NEXT: {{ $}}
; GCN-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; GCN-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; GCN-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; GCN-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; GCN-NEXT: [[SUB:%[0-9]+]]:_(<2 x s16>) = G_SUB [[COPY]], [[BUILD_VECTOR]]
; GCN-NEXT: $vgpr0 = COPY [[SUB]](<2 x s16>)
; GCN-NEXT: SI_RETURN implicit $vgpr0
%0:_(<2 x s16>) = COPY $vgpr0
- %2:_(s16) = G_FCONSTANT half 0xH4200
+ %2:_(s16) = G_FCONSTANT half f0x4200
%1:_(<2 x s16>) = G_BUILD_VECTOR %2(s16), %2(s16)
%3:_(<2 x s16>) = G_SUB %0, %1
$vgpr0 = COPY %3(<2 x s16>)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir
index 020761352148f2..746808c9a95127 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir
@@ -243,14 +243,14 @@ body: |
; CHECK: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; CHECK-NEXT: %two:_(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: %two:_(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: %two_s32:_(s32) = G_ANYEXT %two(s16)
; CHECK-NEXT: %two_splat:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %two_s32(s32), %two_s32(s32)
- ; CHECK-NEXT: %zero:_(s16) = G_FCONSTANT half 0xH0000
+ ; CHECK-NEXT: %zero:_(s16) = G_FCONSTANT half f0x0000
; CHECK-NEXT: %zero_s32:_(s32) = G_ANYEXT %zero(s16)
; CHECK-NEXT: %undef:_(s32) = G_IMPLICIT_DEF
; CHECK-NEXT: %zero_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %zero_s32(s32), %undef(s32)
- ; CHECK-NEXT: %one:_(s16) = G_FCONSTANT half 0xH3C00
+ ; CHECK-NEXT: %one:_(s16) = G_FCONSTANT half f0x3C00
; CHECK-NEXT: %one_s32:_(s32) = G_ANYEXT %one(s16)
; CHECK-NEXT: %one_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %one_s32(s32), %undef(s32)
; CHECK-NEXT: [[FMUL:%[0-9]+]]:_(<2 x s16>) = G_FMUL [[COPY]], %two_splat
@@ -258,14 +258,14 @@ body: |
; CHECK-NEXT: [[FMINNUM_IEEE:%[0-9]+]]:_(<2 x s16>) = G_FMINNUM_IEEE [[FMAXNUM_IEEE]], %one_undef
; CHECK-NEXT: $vgpr0 = COPY [[FMINNUM_IEEE]](<2 x s16>)
%0:_(<2 x s16>) = COPY $vgpr0
- %two:_(s16) = G_FCONSTANT half 0xH4000
+ %two:_(s16) = G_FCONSTANT half f0x4000
%two_s32:_(s32) = G_ANYEXT %two(s16)
%two_splat:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %two_s32(s32), %two_s32(s32)
- %zero:_(s16) = G_FCONSTANT half 0xH0000
+ %zero:_(s16) = G_FCONSTANT half f0x0000
%zero_s32:_(s32) = G_ANYEXT %zero(s16)
%undef:_(s32) = G_IMPLICIT_DEF
%zero_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %zero_s32(s32), %undef(s32)
- %one:_(s16) = G_FCONSTANT half 0xH3C00
+ %one:_(s16) = G_FCONSTANT half f0x3C00
%one_s32:_(s32) = G_ANYEXT %one(s16)
%one_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %one_s32(s32), %undef(s32)
%4:_(<2 x s16>) = G_FMUL %0, %two_splat
@@ -293,14 +293,14 @@ body: |
; CHECK: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; CHECK-NEXT: %two:_(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: %two:_(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: %two_s32:_(s32) = G_ANYEXT %two(s16)
; CHECK-NEXT: %two_splat:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %two_s32(s32), %two_s32(s32)
- ; CHECK-NEXT: %snan:_(s16) = G_FCONSTANT half 0xH7C01
+ ; CHECK-NEXT: %snan:_(s16) = G_FCONSTANT half f0x7C01
; CHECK-NEXT: %snan_s32:_(s32) = G_ANYEXT %snan(s16)
; CHECK-NEXT: %undef:_(s32) = G_IMPLICIT_DEF
; CHECK-NEXT: %snan_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %snan_s32(s32), %undef(s32)
- ; CHECK-NEXT: %qnan:_(s16) = G_FCONSTANT half 0xH7E01
+ ; CHECK-NEXT: %qnan:_(s16) = G_FCONSTANT half f0x7E01
; CHECK-NEXT: %qnan_s32:_(s32) = G_ANYEXT %qnan(s16)
; CHECK-NEXT: %qnan_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %qnan_s32(s32), %undef(s32)
; CHECK-NEXT: [[FMUL:%[0-9]+]]:_(<2 x s16>) = G_FMUL [[COPY]], %two_splat
@@ -309,14 +309,14 @@ body: |
; CHECK-NEXT: [[FMINNUM_IEEE:%[0-9]+]]:_(<2 x s16>) = G_FMINNUM_IEEE [[FMAXNUM_IEEE]], %qnan_undef
; CHECK-NEXT: $vgpr0 = COPY [[FMINNUM_IEEE]](<2 x s16>)
%0:_(<2 x s16>) = COPY $vgpr0
- %two:_(s16) = G_FCONSTANT half 0xH4000
+ %two:_(s16) = G_FCONSTANT half f0x4000
%two_s32:_(s32) = G_ANYEXT %two(s16)
%two_splat:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %two_s32(s32), %two_s32(s32)
- %snan:_(s16) = G_FCONSTANT half 0xH7C01
+ %snan:_(s16) = G_FCONSTANT half f0x7C01
%snan_s32:_(s32) = G_ANYEXT %snan(s16)
%undef:_(s32) = G_IMPLICIT_DEF
%snan_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %snan_s32(s32), %undef(s32)
- %qnan:_(s16) = G_FCONSTANT half 0xH7E01
+ %qnan:_(s16) = G_FCONSTANT half f0x7E01
%qnan_s32:_(s32) = G_ANYEXT %qnan(s16)
%qnan_undef:_(<2 x s16>) = G_BUILD_VECTOR_TRUNC %qnan_s32(s32), %undef(s32)
%4:_(<2 x s16>) = G_FMUL %0, %two_splat
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir
index 6c5339e36c77f4..c7c5fd2ebbbe05 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir
@@ -39,7 +39,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = G_FSQRT %x
- ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half 0xH3C00
+ ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half f0x3C00
; GCN-NEXT: %rsq:_(s16) = contract G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -66,7 +66,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = contract G_FSQRT %x
- ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half 0xH3C00
+ ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half f0x3C00
; GCN-NEXT: %rsq:_(s16) = G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -119,7 +119,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = G_FSQRT %x
- ; GCN-NEXT: %neg_one:_(s16) = G_FCONSTANT half 0xHBC00
+ ; GCN-NEXT: %neg_one:_(s16) = G_FCONSTANT half f0xBC00
; GCN-NEXT: %rsq:_(s16) = contract G_FDIV %neg_one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -146,7 +146,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = contract G_FSQRT %x
- ; GCN-NEXT: %neg_one:_(s16) = G_FCONSTANT half 0xHBC00
+ ; GCN-NEXT: %neg_one:_(s16) = G_FCONSTANT half f0xBC00
; GCN-NEXT: %rsq:_(s16) = G_FDIV %neg_one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -173,7 +173,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = contract G_FSQRT %x
- ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half 0xH3C00
+ ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half f0x3C00
; GCN-NEXT: %rsq:_(s16) = contract G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -202,7 +202,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = G_FSQRT %x
- ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half 0xH3C00
+ ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half f0x3C00
; GCN-NEXT: %rsq:_(s16) = contract G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -231,7 +231,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = contract G_FSQRT %x
- ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half 0xH3C00
+ ; GCN-NEXT: %one:_(s16) = G_FCONSTANT half f0x3C00
; GCN-NEXT: %rsq:_(s16) = G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -486,7 +486,7 @@ body: |
; GCN-NEXT: {{ $}}
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
- ; GCN-NEXT: %fract:_(s16) = G_FCONSTANT half 0xH3800
+ ; GCN-NEXT: %fract:_(s16) = G_FCONSTANT half f0x3800
; GCN-NEXT: [[INT:%[0-9]+]]:_(s16) = contract G_INTRINSIC intrinsic(@llvm.amdgcn.rsq), %x(s16)
; GCN-NEXT: %rsq:_(s16) = contract G_FMUL [[INT]], %fract
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
@@ -513,7 +513,7 @@ body: |
; GCN-NEXT: {{ $}}
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
- ; GCN-NEXT: %neg_fract:_(s16) = G_FCONSTANT half 0xHB800
+ ; GCN-NEXT: %neg_fract:_(s16) = G_FCONSTANT half f0xB800
; GCN-NEXT: [[INT:%[0-9]+]]:_(s16) = contract G_INTRINSIC intrinsic(@llvm.amdgcn.rsq), %x(s16)
; GCN-NEXT: %rsq:_(s16) = contract G_FMUL [[INT]], %neg_fract
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
@@ -541,7 +541,7 @@ body: |
; GCN-NEXT: {{ $}}
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
- ; GCN-NEXT: %ten:_(s16) = G_FCONSTANT half 0xH4900
+ ; GCN-NEXT: %ten:_(s16) = G_FCONSTANT half f0x4900
; GCN-NEXT: [[INT:%[0-9]+]]:_(s16) = contract G_INTRINSIC intrinsic(@llvm.amdgcn.rsq), %x(s16)
; GCN-NEXT: %rsq:_(s16) = contract G_FMUL [[INT]], %ten
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
@@ -568,7 +568,7 @@ body: |
; GCN-NEXT: {{ $}}
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GCN-NEXT: %x:_(s16) = G_TRUNC [[COPY]](s32)
- ; GCN-NEXT: %neg_ten:_(s16) = G_FCONSTANT half 0xHC900
+ ; GCN-NEXT: %neg_ten:_(s16) = G_FCONSTANT half f0xC900
; GCN-NEXT: [[INT:%[0-9]+]]:_(s16) = contract G_INTRINSIC intrinsic(@llvm.amdgcn.rsq), %x(s16)
; GCN-NEXT: %rsq:_(s16) = contract G_FMUL [[INT]], %neg_ten
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir
index 99170d3276cc2d..499a6be159e45b 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir
@@ -709,14 +709,14 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; CHECK-NEXT: [[FMINNUM:%[0-9]+]]:_(s16) = G_FMINNUM [[TRUNC]], [[C]]
; CHECK-NEXT: [[FNEG:%[0-9]+]]:_(s16) = G_FNEG [[FMINNUM]]
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[FNEG]](s16)
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%0:_(s32) = COPY $vgpr0
%1:_(s16) = G_TRUNC %0:_(s32)
- %2:_(s16) = G_FCONSTANT half 0xH3118
+ %2:_(s16) = G_FCONSTANT half f0x3118
%3:_(s16) = G_FMINNUM %1:_, %2:_
%4:_(s16) = G_FNEG %3:_
%5:_(s32) = G_ANYEXT %4:_(s16)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir
index 7bd51b87fbea47..7b23bac233b894 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir
@@ -37,7 +37,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; CHECK-NEXT: %input:_(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: %cst:_(s16) = G_FCONSTANT half 0xH0000
+ ; CHECK-NEXT: %cst:_(s16) = G_FCONSTANT half f0x0000
; CHECK-NEXT: %sub:_(s16) = G_FSUB %cst, %input
; CHECK-NEXT: %res:_(s32) = G_ANYEXT %sub(s16)
; CHECK-NEXT: $vgpr0 = COPY %res(s32)
@@ -225,7 +225,7 @@ body: |
; CHECK: liveins: $vgpr0_vgpr1
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: %input:_(<4 x s16>) = COPY $vgpr0_vgpr1
- ; CHECK-NEXT: %cst:_(s16) = G_FCONSTANT half 0xH0000
+ ; CHECK-NEXT: %cst:_(s16) = G_FCONSTANT half f0x0000
; CHECK-NEXT: %veccst:_(<4 x s16>) = G_BUILD_VECTOR %cst(s16), %cst(s16), %cst(s16), %cst(s16)
; CHECK-NEXT: %sub:_(<4 x s16>) = G_FSUB %veccst, %input
; CHECK-NEXT: $vgpr0_vgpr1 = COPY %sub(<4 x s16>)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir
index a0ba67f6df0a14..4f79df4a7e250f 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir
@@ -122,7 +122,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $sgpr0
; GCN-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = G_FSQRT [[TRUNC]]
- ; GCN-NEXT: %one:_(s16) = contract G_FCONSTANT half 0xH3C00
+ ; GCN-NEXT: %one:_(s16) = contract G_FCONSTANT half f0x3C00
; GCN-NEXT: %rsq:_(s16) = contract G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
@@ -150,7 +150,7 @@ body: |
; GCN-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $sgpr0
; GCN-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; GCN-NEXT: %sqrt:_(s16) = G_FSQRT [[TRUNC]]
- ; GCN-NEXT: %one:_(s16) = contract G_FCONSTANT half 0xHBC00
+ ; GCN-NEXT: %one:_(s16) = contract G_FCONSTANT half f0xBC00
; GCN-NEXT: %rsq:_(s16) = contract G_FDIV %one, %sqrt
; GCN-NEXT: %ext:_(s32) = G_ANYEXT %rsq(s16)
; GCN-NEXT: $vgpr0 = COPY %ext(s32)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll
index 3206f8e55f44eb..ce1e252de7ef4a 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll
@@ -18,7 +18,7 @@ define <3 x bfloat> @v3bf16(<3 x bfloat> %arg0) {
; GFX9-NEXT: [[ANYEXT3:%[0-9]+]]:_(s32) = G_ANYEXT [[UV3]](s16)
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<3 x s32>) = G_BUILD_VECTOR [[ANYEXT]](s32), [[ANYEXT1]](s32), [[ANYEXT2]](s32)
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(<3 x s16>) = G_TRUNC [[BUILD_VECTOR]](<3 x s32>)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat 0xR0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat f0x0000
; GFX9-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<3 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[SHUF:%[0-9]+]]:_(<3 x s16>) = G_SHUFFLE_VECTOR [[TRUNC]](<3 x s16>), [[BUILD_VECTOR1]], shufflemask(3, 1, 2)
; GFX9-NEXT: [[UV4:%[0-9]+]]:_(s16), [[UV5:%[0-9]+]]:_(s16), [[UV6:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[SHUF]](<3 x s16>)
@@ -46,7 +46,7 @@ define <4 x bfloat> @v4bf16(<4 x bfloat> %arg0) {
; GFX9-NEXT: [[ANYEXT3:%[0-9]+]]:_(s32) = G_ANYEXT [[UV3]](s16)
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[ANYEXT]](s32), [[ANYEXT1]](s32), [[ANYEXT2]](s32), [[ANYEXT3]](s32)
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(<4 x s16>) = G_TRUNC [[BUILD_VECTOR]](<4 x s32>)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat 0xR0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat f0x0000
; GFX9-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<4 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[SHUF:%[0-9]+]]:_(<4 x s16>) = G_SHUFFLE_VECTOR [[TRUNC]](<4 x s16>), [[BUILD_VECTOR1]], shufflemask(3, 1, 2, 0)
; GFX9-NEXT: [[UV4:%[0-9]+]]:_(s16), [[UV5:%[0-9]+]]:_(s16), [[UV6:%[0-9]+]]:_(s16), [[UV7:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[SHUF]](<4 x s16>)
@@ -78,7 +78,7 @@ define <5 x bfloat> @v5bf16(<5 x bfloat> %arg0) {
; GFX9-NEXT: [[ANYEXT5:%[0-9]+]]:_(s32) = G_ANYEXT [[UV5]](s16)
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<5 x s32>) = G_BUILD_VECTOR [[ANYEXT]](s32), [[ANYEXT1]](s32), [[ANYEXT2]](s32), [[ANYEXT3]](s32), [[ANYEXT4]](s32)
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(<5 x s16>) = G_TRUNC [[BUILD_VECTOR]](<5 x s32>)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat 0xR0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat f0x0000
; GFX9-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<5 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[SHUF:%[0-9]+]]:_(<5 x s16>) = G_SHUFFLE_VECTOR [[TRUNC]](<5 x s16>), [[BUILD_VECTOR1]], shufflemask(3, 1, 2, 0, 4)
; GFX9-NEXT: [[UV6:%[0-9]+]]:_(s16), [[UV7:%[0-9]+]]:_(s16), [[UV8:%[0-9]+]]:_(s16), [[UV9:%[0-9]+]]:_(s16), [[UV10:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[SHUF]](<5 x s16>)
@@ -112,7 +112,7 @@ define <6 x bfloat> @v6bf16(<6 x bfloat> %arg0) {
; GFX9-NEXT: [[ANYEXT5:%[0-9]+]]:_(s32) = G_ANYEXT [[UV5]](s16)
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<6 x s32>) = G_BUILD_VECTOR [[ANYEXT]](s32), [[ANYEXT1]](s32), [[ANYEXT2]](s32), [[ANYEXT3]](s32), [[ANYEXT4]](s32), [[ANYEXT5]](s32)
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(<6 x s16>) = G_TRUNC [[BUILD_VECTOR]](<6 x s32>)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat 0xR0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat f0x0000
; GFX9-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<6 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[SHUF:%[0-9]+]]:_(<6 x s16>) = G_SHUFFLE_VECTOR [[TRUNC]](<6 x s16>), [[BUILD_VECTOR1]], shufflemask(3, 1, 2, 0, 4, 5)
; GFX9-NEXT: [[UV6:%[0-9]+]]:_(s16), [[UV7:%[0-9]+]]:_(s16), [[UV8:%[0-9]+]]:_(s16), [[UV9:%[0-9]+]]:_(s16), [[UV10:%[0-9]+]]:_(s16), [[UV11:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[SHUF]](<6 x s16>)
@@ -150,7 +150,7 @@ define <7 x bfloat> @v7bf16(<7 x bfloat> %arg0) {
; GFX9-NEXT: [[ANYEXT7:%[0-9]+]]:_(s32) = G_ANYEXT [[UV7]](s16)
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<7 x s32>) = G_BUILD_VECTOR [[ANYEXT]](s32), [[ANYEXT1]](s32), [[ANYEXT2]](s32), [[ANYEXT3]](s32), [[ANYEXT4]](s32), [[ANYEXT5]](s32), [[ANYEXT6]](s32)
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(<7 x s16>) = G_TRUNC [[BUILD_VECTOR]](<7 x s32>)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat 0xR0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat f0x0000
; GFX9-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<7 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[SHUF:%[0-9]+]]:_(<7 x s16>) = G_SHUFFLE_VECTOR [[TRUNC]](<7 x s16>), [[BUILD_VECTOR1]], shufflemask(3, 1, 2, 0, 4, 5, 6)
; GFX9-NEXT: [[UV8:%[0-9]+]]:_(s16), [[UV9:%[0-9]+]]:_(s16), [[UV10:%[0-9]+]]:_(s16), [[UV11:%[0-9]+]]:_(s16), [[UV12:%[0-9]+]]:_(s16), [[UV13:%[0-9]+]]:_(s16), [[UV14:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[SHUF]](<7 x s16>)
@@ -190,7 +190,7 @@ define <8 x bfloat> @v8bf16(<8 x bfloat> %arg0) {
; GFX9-NEXT: [[ANYEXT7:%[0-9]+]]:_(s32) = G_ANYEXT [[UV7]](s16)
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<8 x s32>) = G_BUILD_VECTOR [[ANYEXT]](s32), [[ANYEXT1]](s32), [[ANYEXT2]](s32), [[ANYEXT3]](s32), [[ANYEXT4]](s32), [[ANYEXT5]](s32), [[ANYEXT6]](s32), [[ANYEXT7]](s32)
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(<8 x s16>) = G_TRUNC [[BUILD_VECTOR]](<8 x s32>)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat 0xR0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT bfloat f0x0000
; GFX9-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[SHUF:%[0-9]+]]:_(<8 x s16>) = G_SHUFFLE_VECTOR [[TRUNC]](<8 x s16>), [[BUILD_VECTOR1]], shufflemask(3, 1, 2, 0, 4, 5, 6, 7)
; GFX9-NEXT: [[UV8:%[0-9]+]]:_(s16), [[UV9:%[0-9]+]]:_(s16), [[UV10:%[0-9]+]]:_(s16), [[UV11:%[0-9]+]]:_(s16), [[UV12:%[0-9]+]]:_(s16), [[UV13:%[0-9]+]]:_(s16), [[UV14:%[0-9]+]]:_(s16), [[UV15:%[0-9]+]]:_(s16) = G_UNMERGE_VALUES [[SHUF]](<8 x s16>)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll
index be0c9e2a602faf..0eb28abce0c212 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll
@@ -55,7 +55,7 @@ define <2 x half> @test_atomicrmw_fadd_vector(ptr addrspace(3) %addr) {
; CHECK-NEXT: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p3) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; CHECK-NEXT: [[ATOMICRMW_FADD:%[0-9]+]]:_(<2 x s16>) = G_ATOMICRMW_FADD [[COPY]](p3), [[BUILD_VECTOR]] :: (load store seq_cst (<2 x s16>) on %ir.addr, addrspace 3)
; CHECK-NEXT: $vgpr0 = COPY [[ATOMICRMW_FADD]](<2 x s16>)
@@ -71,7 +71,7 @@ define <2 x half> @test_atomicrmw_fsub_vector(ptr addrspace(3) %addr) {
; CHECK-NEXT: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p3) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 0
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(<2 x s16>) = G_LOAD [[COPY]](p3) :: (load (<2 x s16>) from %ir.addr, addrspace 3)
@@ -109,7 +109,7 @@ define <2 x half> @test_atomicrmw_fmin_vector(ptr addrspace(3) %addr) {
; CHECK-NEXT: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p3) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 0
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(<2 x s16>) = G_LOAD [[COPY]](p3) :: (load (<2 x s16>) from %ir.addr, addrspace 3)
@@ -147,7 +147,7 @@ define <2 x half> @test_atomicrmw_fmax_vector(ptr addrspace(3) %addr) {
; CHECK-NEXT: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p3) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 0
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(<2 x s16>) = G_LOAD [[COPY]](p3) :: (load (<2 x s16>) from %ir.addr, addrspace 3)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll
index 7691f4c30de04a..bfde7cde838bf6 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll
@@ -1611,7 +1611,7 @@ define amdgpu_kernel void @test_call_external_void_func_f16_imm() #0 {
; CHECK-NEXT: [[COPY7:%[0-9]+]]:sgpr_64 = COPY $sgpr6_sgpr7
; CHECK-NEXT: [[COPY8:%[0-9]+]]:sgpr_64 = COPY $sgpr4_sgpr5
; CHECK-NEXT: [[COPY9:%[0-9]+]]:_(p4) = COPY $sgpr8_sgpr9
- ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4400
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4400
; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def $scc
; CHECK-NEXT: [[GV:%[0-9]+]]:_(p0) = G_GLOBAL_VALUE @external_void_func_f16
; CHECK-NEXT: [[COPY10:%[0-9]+]]:_(p4) = COPY [[COPY8]]
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir
index 6906ff9f5b3490..8480580e14c814 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir
@@ -31,7 +31,7 @@ body: |
bb.0:
; GCN-LABEL: name: test_fconstant_s16
- ; GCN: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+ ; GCN: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; GCN-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[C]](s16)
; GCN-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%0:_(s16) = G_FCONSTANT half 1.0
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir
index c230edac5ddf9d..da45bb828bfbd1 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir
@@ -102,7 +102,7 @@ body: |
; VI-NEXT: {{ $}}
; VI-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; VI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
- ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[INT]](s16)
@@ -113,7 +113,7 @@ body: |
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[FMUL]](s16)
; GFX9-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[INT]](s16)
@@ -327,7 +327,7 @@ body: |
; VI-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
; VI-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST]], [[C]](s32)
; VI-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
- ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[INT]](s16)
@@ -349,7 +349,7 @@ body: |
; GFX9-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
; GFX9-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST]], [[C]](s32)
; GFX9-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[FMUL]](s16)
; GFX9-NEXT: [[FMUL1:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC1]], [[C1]]
@@ -407,7 +407,7 @@ body: |
; VI-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
; VI-NEXT: [[BITCAST1:%[0-9]+]]:_(s32) = G_BITCAST [[UV1]](<2 x s16>)
; VI-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
- ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[INT]](s16)
@@ -432,7 +432,7 @@ body: |
; GFX9-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
; GFX9-NEXT: [[BITCAST1:%[0-9]+]]:_(s32) = G_BITCAST [[UV1]](<2 x s16>)
; GFX9-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[FMUL]](s16)
; GFX9-NEXT: [[FMUL1:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC1]], [[C1]]
@@ -517,7 +517,7 @@ body: |
; VI-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
; VI-NEXT: [[LSHR1:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST1]], [[C]](s32)
; VI-NEXT: [[TRUNC3:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR1]](s32)
- ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[INT]](s16)
@@ -556,7 +556,7 @@ body: |
; GFX9-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
; GFX9-NEXT: [[LSHR1:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST1]], [[C]](s32)
; GFX9-NEXT: [[TRUNC3:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR1]](s32)
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.cos), [[FMUL]](s16)
; GFX9-NEXT: [[FMUL1:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC1]], [[C1]]
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir
index 1f9c059c2ac60b..d1c03a950bfe7e 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir
@@ -2145,7 +2145,7 @@ body: |
; SI-LABEL: name: test_fdiv_s16_constant_one_rcp
; SI: liveins: $vgpr0
; SI-NEXT: {{ $}}
- ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
+ ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
; SI-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; SI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; SI-NEXT: [[FPEXT:%[0-9]+]]:_(s32) = G_FPEXT [[C]](s16)
@@ -2219,7 +2219,7 @@ body: |
; SI-LABEL: name: test_fdiv_s16_constant_negative_one_rcp
; SI: liveins: $vgpr0
; SI-NEXT: {{ $}}
- ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xHBC00
+ ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0xBC00
; SI-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; SI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; SI-NEXT: [[FPEXT:%[0-9]+]]:_(s32) = G_FPEXT [[C]](s16)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir
index 78bed9e19c65e9..36c085291fbdab 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir
@@ -1032,7 +1032,7 @@ body: |
; SI: liveins: $vgpr0, $vgpr1
; SI-NEXT: {{ $}}
; SI-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; SI-NEXT: [[BITCAST:%[0-9]+]]:_(s32) = G_BITCAST [[COPY]](<2 x s16>)
; SI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST]](s32)
; SI-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
@@ -1057,7 +1057,7 @@ body: |
; VI: liveins: $vgpr0, $vgpr1
; VI-NEXT: {{ $}}
; VI-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; VI-NEXT: [[BITCAST:%[0-9]+]]:_(s32) = G_BITCAST [[COPY]](<2 x s16>)
; VI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST]](s32)
; VI-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
@@ -1080,13 +1080,13 @@ body: |
; GFX9: liveins: $vgpr0, $vgpr1
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[FCANONICALIZE:%[0-9]+]]:_(<2 x s16>) = G_FCANONICALIZE [[COPY]]
; GFX9-NEXT: [[FMAXNUM_IEEE:%[0-9]+]]:_(<2 x s16>) = G_FMAXNUM_IEEE [[FCANONICALIZE]], [[BUILD_VECTOR]]
; GFX9-NEXT: $vgpr0 = COPY [[FMAXNUM_IEEE]](<2 x s16>)
%0:_(<2 x s16>) = COPY $vgpr0
- %1:_(s16) = G_FCONSTANT half 0xH0000
+ %1:_(s16) = G_FCONSTANT half f0x0000
%2:_(<2 x s16>) = G_BUILD_VECTOR %1(s16), %1(s16)
%3:_(<2 x s16>) = G_FMAXNUM %0, %2
$vgpr0 = COPY %3
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir
index a20c2fa21eb1e2..6b4fddf9fffab8 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir
@@ -1032,7 +1032,7 @@ body: |
; SI: liveins: $vgpr0, $vgpr1
; SI-NEXT: {{ $}}
; SI-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; SI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; SI-NEXT: [[BITCAST:%[0-9]+]]:_(s32) = G_BITCAST [[COPY]](<2 x s16>)
; SI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST]](s32)
; SI-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
@@ -1057,7 +1057,7 @@ body: |
; VI: liveins: $vgpr0, $vgpr1
; VI-NEXT: {{ $}}
; VI-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; VI-NEXT: [[BITCAST:%[0-9]+]]:_(s32) = G_BITCAST [[COPY]](<2 x s16>)
; VI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST]](s32)
; VI-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
@@ -1080,13 +1080,13 @@ body: |
; GFX9: liveins: $vgpr0, $vgpr1
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(<2 x s16>) = COPY $vgpr0
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX9-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s16>) = G_BUILD_VECTOR [[C]](s16), [[C]](s16)
; GFX9-NEXT: [[FCANONICALIZE:%[0-9]+]]:_(<2 x s16>) = G_FCANONICALIZE [[COPY]]
; GFX9-NEXT: [[FMINNUM_IEEE:%[0-9]+]]:_(<2 x s16>) = G_FMINNUM_IEEE [[FCANONICALIZE]], [[BUILD_VECTOR]]
; GFX9-NEXT: $vgpr0 = COPY [[FMINNUM_IEEE]](<2 x s16>)
%0:_(<2 x s16>) = COPY $vgpr0
- %1:_(s16) = G_FCONSTANT half 0xH0000
+ %1:_(s16) = G_FCONSTANT half f0x0000
%2:_(<2 x s16>) = G_BUILD_VECTOR %1(s16), %1(s16)
%3:_(<2 x s16>) = G_FMINNUM %0, %2
$vgpr0 = COPY %3
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir
index e7a808bdd6de4d..e60f8e0ae0de9e 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir
@@ -102,7 +102,7 @@ body: |
; VI-NEXT: {{ $}}
; VI-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; VI-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
- ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[INT]](s16)
@@ -113,7 +113,7 @@ body: |
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GFX9-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[FMUL]](s16)
; GFX9-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[INT]](s16)
@@ -327,7 +327,7 @@ body: |
; VI-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
; VI-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST]], [[C]](s32)
; VI-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
- ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[INT]](s16)
@@ -349,7 +349,7 @@ body: |
; GFX9-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
; GFX9-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST]], [[C]](s32)
; GFX9-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[FMUL]](s16)
; GFX9-NEXT: [[FMUL1:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC1]], [[C1]]
@@ -407,7 +407,7 @@ body: |
; VI-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
; VI-NEXT: [[BITCAST1:%[0-9]+]]:_(s32) = G_BITCAST [[UV1]](<2 x s16>)
; VI-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
- ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[INT]](s16)
@@ -432,7 +432,7 @@ body: |
; GFX9-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR]](s32)
; GFX9-NEXT: [[BITCAST1:%[0-9]+]]:_(s32) = G_BITCAST [[UV1]](<2 x s16>)
; GFX9-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[FMUL]](s16)
; GFX9-NEXT: [[FMUL1:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC1]], [[C1]]
@@ -517,7 +517,7 @@ body: |
; VI-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
; VI-NEXT: [[LSHR1:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST1]], [[C]](s32)
; VI-NEXT: [[TRUNC3:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR1]](s32)
- ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; VI-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; VI-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; VI-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.fract), [[FMUL]](s16)
; VI-NEXT: [[INT1:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[INT]](s16)
@@ -556,7 +556,7 @@ body: |
; GFX9-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[BITCAST1]](s32)
; GFX9-NEXT: [[LSHR1:%[0-9]+]]:_(s32) = G_LSHR [[BITCAST1]], [[C]](s32)
; GFX9-NEXT: [[TRUNC3:%[0-9]+]]:_(s16) = G_TRUNC [[LSHR1]](s32)
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3118
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3118
; GFX9-NEXT: [[FMUL:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC]], [[C1]]
; GFX9-NEXT: [[INT:%[0-9]+]]:_(s16) = G_INTRINSIC intrinsic(@llvm.amdgcn.sin), [[FMUL]](s16)
; GFX9-NEXT: [[FMUL1:%[0-9]+]]:_(s16) = G_FMUL [[TRUNC1]], [[C1]]
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir
index 2a3fa6fbfdb770..3f5712b665c7f4 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir
@@ -499,12 +499,12 @@ body: |
; GFX6-NEXT: [[FADD:%[0-9]+]]:_(s32) = G_FADD [[FPEXT1]], [[FPEXT2]]
; GFX6-NEXT: [[FPTRUNC1:%[0-9]+]]:_(s16) = G_FPTRUNC [[FADD]](s32)
; GFX6-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FPTRUNC1]]
- ; GFX6-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX6-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX6-NEXT: [[FPEXT3:%[0-9]+]]:_(s32) = G_FPEXT [[FABS]](s16)
; GFX6-NEXT: [[FPEXT4:%[0-9]+]]:_(s32) = G_FPEXT [[C]](s16)
; GFX6-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FPEXT3]](s32), [[FPEXT4]]
- ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX6-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C1]], [[C2]]
; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX6-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -526,10 +526,10 @@ body: |
; GFX8-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX8-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX8-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX8-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX8-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX8-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C]]
- ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX8-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C1]], [[C2]]
; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX8-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -548,10 +548,10 @@ body: |
; GFX9-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX9-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX9-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX9-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX9-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C]]
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX9-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C1]], [[C2]]
; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX9-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -592,12 +592,12 @@ body: |
; GFX6-NEXT: [[FADD:%[0-9]+]]:_(s32) = G_FADD [[FPEXT1]], [[FPEXT2]]
; GFX6-NEXT: [[FPTRUNC1:%[0-9]+]]:_(s16) = G_FPTRUNC [[FADD]](s32)
; GFX6-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FPTRUNC1]]
- ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX6-NEXT: [[FPEXT3:%[0-9]+]]:_(s32) = G_FPEXT [[FABS]](s16)
; GFX6-NEXT: [[FPEXT4:%[0-9]+]]:_(s32) = G_FPEXT [[C1]](s16)
; GFX6-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FPEXT3]](s32), [[FPEXT4]]
- ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX6-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX6-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX6-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -647,10 +647,10 @@ body: |
; GFX8-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX8-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX8-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX8-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C1]]
- ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX8-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX8-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX8-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -686,10 +686,10 @@ body: |
; GFX9-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX9-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX9-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX9-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C1]]
- ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX9-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX9-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX9-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -739,12 +739,12 @@ body: |
; GFX6-NEXT: [[FADD:%[0-9]+]]:_(s32) = G_FADD [[FPEXT1]], [[FPEXT2]]
; GFX6-NEXT: [[FPTRUNC1:%[0-9]+]]:_(s16) = G_FPTRUNC [[FADD]](s32)
; GFX6-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FPTRUNC1]]
- ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX6-NEXT: [[FPEXT3:%[0-9]+]]:_(s32) = G_FPEXT [[FABS]](s16)
; GFX6-NEXT: [[FPEXT4:%[0-9]+]]:_(s32) = G_FPEXT [[C1]](s16)
; GFX6-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FPEXT3]](s32), [[FPEXT4]]
- ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX6-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX6-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX6-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -833,10 +833,10 @@ body: |
; GFX8-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX8-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX8-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX8-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C1]]
- ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX8-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX8-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX8-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -900,10 +900,10 @@ body: |
; GFX9-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX9-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX9-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX9-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C1]]
- ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX9-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX9-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX9-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -979,12 +979,12 @@ body: |
; GFX6-NEXT: [[FADD:%[0-9]+]]:_(s32) = G_FADD [[FPEXT1]], [[FPEXT2]]
; GFX6-NEXT: [[FPTRUNC1:%[0-9]+]]:_(s16) = G_FPTRUNC [[FADD]](s32)
; GFX6-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FPTRUNC1]]
- ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX6-NEXT: [[FPEXT3:%[0-9]+]]:_(s32) = G_FPEXT [[FABS]](s16)
; GFX6-NEXT: [[FPEXT4:%[0-9]+]]:_(s32) = G_FPEXT [[C1]](s16)
; GFX6-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FPEXT3]](s32), [[FPEXT4]]
- ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX6-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX6-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX6-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX6-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX6-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -1085,10 +1085,10 @@ body: |
; GFX8-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX8-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX8-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX8-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C1]]
- ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX8-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX8-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX8-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX8-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX8-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
@@ -1153,10 +1153,10 @@ body: |
; GFX9-NEXT: [[INTRINSIC_TRUNC:%[0-9]+]]:_(s16) = G_INTRINSIC_TRUNC [[TRUNC]]
; GFX9-NEXT: [[FSUB:%[0-9]+]]:_(s16) = G_FSUB [[TRUNC]], [[INTRINSIC_TRUNC]]
; GFX9-NEXT: [[FABS:%[0-9]+]]:_(s16) = G_FABS [[FSUB]]
- ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3800
+ ; GFX9-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3800
; GFX9-NEXT: [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(oge), [[FABS]](s16), [[C1]]
- ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX9-NEXT: [[C2:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX9-NEXT: [[C3:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX9-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[FCMP]](s1), [[C2]], [[C3]]
; GFX9-NEXT: [[C4:%[0-9]+]]:_(s16) = G_CONSTANT i16 -32768
; GFX9-NEXT: [[C5:%[0-9]+]]:_(s16) = G_CONSTANT i16 32767
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir
index 4cbdea64f1c00d..e3ed063b8248ca 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir
@@ -367,8 +367,8 @@ body: |
; GFX6-NEXT: {{ $}}
; GFX6-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GFX6-NEXT: [[TRUNC:%[0-9]+]]:_(s1) = G_TRUNC [[COPY]](s32)
- ; GFX6-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xHBC00
- ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX6-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0xBC00
+ ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX6-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[TRUNC]](s1), [[C]], [[C1]]
; GFX6-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[SELECT]](s16)
; GFX6-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
@@ -377,8 +377,8 @@ body: |
; GFX8-NEXT: {{ $}}
; GFX8-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GFX8-NEXT: [[TRUNC:%[0-9]+]]:_(s1) = G_TRUNC [[COPY]](s32)
- ; GFX8-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xHBC00
- ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX8-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0xBC00
+ ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX8-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[TRUNC]](s1), [[C]], [[C1]]
; GFX8-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[SELECT]](s16)
; GFX8-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir
index 65826d7658f2cd..787c8793a1184e 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir
@@ -339,8 +339,8 @@ body: |
; GFX6-NEXT: {{ $}}
; GFX6-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GFX6-NEXT: [[TRUNC:%[0-9]+]]:_(s1) = G_TRUNC [[COPY]](s32)
- ; GFX6-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX6-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX6-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX6-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[TRUNC]](s1), [[C]], [[C1]]
; GFX6-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[SELECT]](s16)
; GFX6-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
@@ -349,8 +349,8 @@ body: |
; GFX8-NEXT: {{ $}}
; GFX8-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
; GFX8-NEXT: [[TRUNC:%[0-9]+]]:_(s1) = G_TRUNC [[COPY]](s32)
- ; GFX8-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; GFX8-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; GFX8-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; GFX8-NEXT: [[SELECT:%[0-9]+]]:_(s16) = G_SELECT [[TRUNC]](s1), [[C]], [[C1]]
; GFX8-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[SELECT]](s16)
; GFX8-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.wqm.demote.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.wqm.demote.ll
index 0f60f40bd337be..1255dcf832dc75 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.wqm.demote.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.wqm.demote.ll
@@ -875,7 +875,7 @@ define amdgpu_ps void @wqm_deriv(<2 x float> %input, float %arg, i32 %index) {
br label %.continue1
.continue1:
- call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half 0xH3C00, half 0xH0000>, <2 x half> <half 0xH0000, half 0xH3C00>, i1 true, i1 true) #3
+ call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half f0x3C00, half f0x0000>, <2 x half> <half f0x0000, half f0x3C00>, i1 true, i1 true) #3
ret void
}
@@ -1175,7 +1175,7 @@ define amdgpu_ps void @wqm_deriv_loop(<2 x float> %input, float %arg, i32 %index
br i1 %loop.cond, label %.continue0, label %.return
.return:
- call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half 0xH3C00, half 0xH0000>, <2 x half> <half 0xH0000, half 0xH3C00>, i1 true, i1 true) #3
+ call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half f0x3C00, half f0x0000>, <2 x half> <half f0x0000, half f0x3C00>, i1 true, i1 true) #3
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir
index a97d905f2a978c..7b3e06caaadee2 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir
@@ -63,7 +63,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
; CHECK-NEXT: [[FMUL:%[0-9]+]]:vgpr(s16) = G_FMUL [[TRUNC]], [[COPY1]]
; CHECK-NEXT: [[AMDGPU_CLAMP:%[0-9]+]]:vgpr(s16) = nnan G_AMDGPU_CLAMP [[FMUL]]
@@ -75,7 +75,7 @@ body: |
; GFX12-NEXT: {{ $}}
; GFX12-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; GFX12-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; GFX12-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; GFX12-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; GFX12-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
; GFX12-NEXT: [[FMUL:%[0-9]+]]:vgpr(s16) = G_FMUL [[TRUNC]], [[COPY1]]
; GFX12-NEXT: [[AMDGPU_CLAMP:%[0-9]+]]:vgpr(s16) = nnan G_AMDGPU_CLAMP [[FMUL]]
@@ -83,11 +83,11 @@ body: |
; GFX12-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%10:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = G_FMUL %0, %10
- %7:sgpr(s16) = G_FCONSTANT half 0xH3C00
- %6:sgpr(s16) = G_FCONSTANT half 0xH0000
+ %7:sgpr(s16) = G_FCONSTANT half f0x3C00
+ %6:sgpr(s16) = G_FCONSTANT half f0x0000
%11:vgpr(s16) = COPY %6(s16)
%12:vgpr(s16) = COPY %7(s16)
%5:vgpr(s16) = nnan G_AMDGPU_FMED3 %4(s16), %11(s16), %12(s16)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir
index 70fd67363648d8..951f6d5b35d94b 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir
@@ -89,7 +89,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
; CHECK-NEXT: [[FMUL:%[0-9]+]]:vgpr(s16) = G_FMUL [[TRUNC]], [[COPY1]]
; CHECK-NEXT: [[FCANONICALIZE:%[0-9]+]]:vgpr(s16) = G_FCANONICALIZE [[FMUL]]
@@ -98,14 +98,14 @@ body: |
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%12:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = G_FMUL %0, %12
- %5:sgpr(s16) = G_FCONSTANT half 0xH0000
+ %5:sgpr(s16) = G_FCONSTANT half f0x0000
%11:vgpr(s16) = G_FCANONICALIZE %4
%13:vgpr(s16) = COPY %5(s16)
%6:vgpr(s16) = G_FMAXNUM_IEEE %11, %13
- %7:sgpr(s16) = G_FCONSTANT half 0xH3C00
+ %7:sgpr(s16) = G_FCONSTANT half f0x3C00
%14:vgpr(s16) = COPY %7(s16)
%8:vgpr(s16) = G_FMINNUM_IEEE %14, %6
%10:vgpr(s32) = G_ANYEXT %8(s16)
@@ -129,7 +129,7 @@ body: |
; CHECK: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(<2 x s16>) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:sgpr(s32) = G_ANYEXT [[C]](s16)
; CHECK-NEXT: [[BUILD_VECTOR_TRUNC:%[0-9]+]]:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC [[ANYEXT]](s32), [[ANYEXT]](s32)
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(<2 x s16>) = COPY [[BUILD_VECTOR_TRUNC]](<2 x s16>)
@@ -137,13 +137,13 @@ body: |
; CHECK-NEXT: [[AMDGPU_CLAMP:%[0-9]+]]:vgpr(<2 x s16>) = nnan G_AMDGPU_CLAMP [[FMUL]]
; CHECK-NEXT: $vgpr0 = COPY [[AMDGPU_CLAMP]](<2 x s16>)
%0:vgpr(<2 x s16>) = COPY $vgpr0
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%12:sgpr(s32) = G_ANYEXT %3(s16)
%2:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %12(s32), %12(s32)
- %6:sgpr(s16) = G_FCONSTANT half 0xH0000
+ %6:sgpr(s16) = G_FCONSTANT half f0x0000
%13:sgpr(s32) = G_ANYEXT %6(s16)
%5:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %13(s32), %13(s32)
- %9:sgpr(s16) = G_FCONSTANT half 0xH3C00
+ %9:sgpr(s16) = G_FCONSTANT half f0x3C00
%14:sgpr(s32) = G_ANYEXT %9(s16)
%8:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %14(s32), %14(s32)
%15:vgpr(<2 x s16>) = COPY %2(<2 x s16>)
@@ -172,7 +172,7 @@ body: |
; CHECK: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(<2 x s16>) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:sgpr(s32) = G_ANYEXT [[C]](s16)
; CHECK-NEXT: [[BUILD_VECTOR_TRUNC:%[0-9]+]]:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC [[ANYEXT]](s32), [[ANYEXT]](s32)
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(<2 x s16>) = COPY [[BUILD_VECTOR_TRUNC]](<2 x s16>)
@@ -181,14 +181,14 @@ body: |
; CHECK-NEXT: [[AMDGPU_CLAMP:%[0-9]+]]:vgpr(<2 x s16>) = G_AMDGPU_CLAMP [[FCANONICALIZE]]
; CHECK-NEXT: $vgpr0 = COPY [[AMDGPU_CLAMP]](<2 x s16>)
%0:vgpr(<2 x s16>) = COPY $vgpr0
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%17:sgpr(s32) = G_ANYEXT %3(s16)
%2:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %17(s32), %17(s32)
- %6:sgpr(s16) = G_FCONSTANT half 0xH0000
+ %6:sgpr(s16) = G_FCONSTANT half f0x0000
%18:sgpr(s32) = G_ANYEXT %6(s16)
%19:sgpr(s32) = G_IMPLICIT_DEF
%5:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %18(s32), %19(s32)
- %10:sgpr(s16) = G_FCONSTANT half 0xH3C00
+ %10:sgpr(s16) = G_FCONSTANT half f0x3C00
%20:sgpr(s32) = G_ANYEXT %10(s16)
%9:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %20(s32), %19(s32)
%21:vgpr(<2 x s16>) = COPY %2(<2 x s16>)
@@ -286,7 +286,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
; CHECK-NEXT: [[FMUL:%[0-9]+]]:vgpr(s16) = G_FMUL [[TRUNC]], [[COPY1]]
; CHECK-NEXT: [[AMDGPU_CLAMP:%[0-9]+]]:vgpr(s16) = nnan G_AMDGPU_CLAMP [[FMUL]]
@@ -294,13 +294,13 @@ body: |
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%11:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = G_FMUL %0, %11
- %5:sgpr(s16) = G_FCONSTANT half 0xH3C00
+ %5:sgpr(s16) = G_FCONSTANT half f0x3C00
%12:vgpr(s16) = COPY %5(s16)
%6:vgpr(s16) = nnan G_FMINNUM_IEEE %4, %12
- %7:sgpr(s16) = G_FCONSTANT half 0xH0000
+ %7:sgpr(s16) = G_FCONSTANT half f0x0000
%13:vgpr(s16) = COPY %7(s16)
%8:vgpr(s16) = nnan G_FMAXNUM_IEEE %13, %6
%10:vgpr(s32) = G_ANYEXT %8(s16)
@@ -324,7 +324,7 @@ body: |
; CHECK: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(<2 x s16>) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:sgpr(s32) = G_ANYEXT [[C]](s16)
; CHECK-NEXT: [[BUILD_VECTOR_TRUNC:%[0-9]+]]:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC [[ANYEXT]](s32), [[ANYEXT]](s32)
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(<2 x s16>) = COPY [[BUILD_VECTOR_TRUNC]](<2 x s16>)
@@ -332,14 +332,14 @@ body: |
; CHECK-NEXT: [[AMDGPU_CLAMP:%[0-9]+]]:vgpr(<2 x s16>) = nnan G_AMDGPU_CLAMP [[FMUL]]
; CHECK-NEXT: $vgpr0 = COPY [[AMDGPU_CLAMP]](<2 x s16>)
%0:vgpr(<2 x s16>) = COPY $vgpr0
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%13:sgpr(s32) = G_ANYEXT %3(s16)
%2:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %13(s32), %13(s32)
- %6:sgpr(s16) = G_FCONSTANT half 0xH3C00
+ %6:sgpr(s16) = G_FCONSTANT half f0x3C00
%14:sgpr(s32) = G_ANYEXT %6(s16)
%15:sgpr(s32) = G_IMPLICIT_DEF
%5:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %14(s32), %15(s32)
- %10:sgpr(s16) = G_FCONSTANT half 0xH0000
+ %10:sgpr(s16) = G_FCONSTANT half f0x0000
%16:sgpr(s32) = G_ANYEXT %10(s16)
%9:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %15(s32), %16(s32)
%17:vgpr(<2 x s16>) = COPY %2(<2 x s16>)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir
index 2f41d861000403..3ad3918c7e1d9b 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir
@@ -82,21 +82,21 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[FCANONICALIZE:%[0-9]+]]:vgpr(s16) = G_FCANONICALIZE [[TRUNC]]
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
- ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4400
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4400
; CHECK-NEXT: [[COPY2:%[0-9]+]]:vgpr(s16) = COPY [[C1]](s16)
; CHECK-NEXT: [[AMDGPU_FMED3_:%[0-9]+]]:vgpr(s16) = G_AMDGPU_FMED3 [[FCANONICALIZE]], [[COPY1]], [[COPY2]]
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:vgpr(s32) = G_ANYEXT [[AMDGPU_FMED3_]](s16)
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%9:vgpr(s16) = G_FCANONICALIZE %0
%10:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = G_FMAXNUM_IEEE %9, %10
- %5:sgpr(s16) = G_FCONSTANT half 0xH4400
+ %5:sgpr(s16) = G_FCONSTANT half f0x4400
%11:vgpr(s16) = COPY %5(s16)
%6:vgpr(s16) = G_FMINNUM_IEEE %11, %4
%8:vgpr(s32) = G_ANYEXT %6(s16)
@@ -121,19 +121,19 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
- ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4400
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4400
; CHECK-NEXT: [[COPY2:%[0-9]+]]:vgpr(s16) = COPY [[C1]](s16)
; CHECK-NEXT: [[AMDGPU_FMED3_:%[0-9]+]]:vgpr(s16) = nnan G_AMDGPU_FMED3 [[TRUNC]], [[COPY1]], [[COPY2]]
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:vgpr(s32) = G_ANYEXT [[AMDGPU_FMED3_]](s16)
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%9:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = nnan G_FMAXNUM %9, %0
- %5:sgpr(s16) = G_FCONSTANT half 0xH4400
+ %5:sgpr(s16) = G_FCONSTANT half f0x4400
%10:vgpr(s16) = COPY %5(s16)
%6:vgpr(s16) = nnan G_FMINNUM %10, %4
%8:vgpr(s32) = G_ANYEXT %6(s16)
@@ -221,19 +221,19 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4400
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4400
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
- ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[COPY2:%[0-9]+]]:vgpr(s16) = COPY [[C1]](s16)
; CHECK-NEXT: [[AMDGPU_FMED3_:%[0-9]+]]:vgpr(s16) = nnan G_AMDGPU_FMED3 [[TRUNC]], [[COPY2]], [[COPY1]]
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:vgpr(s32) = G_ANYEXT [[AMDGPU_FMED3_]](s16)
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4400
+ %3:sgpr(s16) = G_FCONSTANT half f0x4400
%9:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = nnan G_FMINNUM_IEEE %0, %9
- %5:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %5:sgpr(s16) = G_FCONSTANT half f0x4000
%10:vgpr(s16) = COPY %5(s16)
%6:vgpr(s16) = nnan G_FMAXNUM_IEEE %10, %4
%8:vgpr(s32) = G_ANYEXT %6(s16)
@@ -257,19 +257,19 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
; CHECK-NEXT: [[TRUNC:%[0-9]+]]:vgpr(s16) = G_TRUNC [[COPY]](s32)
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4400
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4400
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s16) = COPY [[C]](s16)
- ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[COPY2:%[0-9]+]]:vgpr(s16) = COPY [[C1]](s16)
; CHECK-NEXT: [[AMDGPU_FMED3_:%[0-9]+]]:vgpr(s16) = nnan G_AMDGPU_FMED3 [[TRUNC]], [[COPY2]], [[COPY1]]
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:vgpr(s32) = G_ANYEXT [[AMDGPU_FMED3_]](s16)
; CHECK-NEXT: $vgpr0 = COPY [[ANYEXT]](s32)
%2:vgpr(s32) = COPY $vgpr0
%0:vgpr(s16) = G_TRUNC %2(s32)
- %3:sgpr(s16) = G_FCONSTANT half 0xH4400
+ %3:sgpr(s16) = G_FCONSTANT half f0x4400
%9:vgpr(s16) = COPY %3(s16)
%4:vgpr(s16) = nnan G_FMINNUM %9, %0
- %5:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %5:sgpr(s16) = G_FCONSTANT half f0x4000
%10:vgpr(s16) = COPY %5(s16)
%6:vgpr(s16) = nnan G_FMAXNUM %10, %4
%8:vgpr(s32) = G_ANYEXT %6(s16)
@@ -426,10 +426,10 @@ body: |
; CHECK: liveins: $vgpr0
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(<2 x s16>) = COPY $vgpr0
- ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4000
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4000
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:sgpr(s32) = G_ANYEXT [[C]](s16)
; CHECK-NEXT: [[BUILD_VECTOR_TRUNC:%[0-9]+]]:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC [[ANYEXT]](s32), [[ANYEXT]](s32)
- ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH4400
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x4400
; CHECK-NEXT: [[ANYEXT1:%[0-9]+]]:sgpr(s32) = G_ANYEXT [[C1]](s16)
; CHECK-NEXT: [[BUILD_VECTOR_TRUNC1:%[0-9]+]]:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC [[ANYEXT1]](s32), [[ANYEXT1]](s32)
; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(<2 x s16>) = COPY [[BUILD_VECTOR_TRUNC]](<2 x s16>)
@@ -438,10 +438,10 @@ body: |
; CHECK-NEXT: [[FMINNUM_IEEE:%[0-9]+]]:vgpr(<2 x s16>) = nnan G_FMINNUM_IEEE [[FMAXNUM_IEEE]], [[COPY2]]
; CHECK-NEXT: $vgpr0 = COPY [[FMINNUM_IEEE]](<2 x s16>)
%0:vgpr(<2 x s16>) = COPY $vgpr0
- %3:sgpr(s16) = G_FCONSTANT half 0xH4000
+ %3:sgpr(s16) = G_FCONSTANT half f0x4000
%9:sgpr(s32) = G_ANYEXT %3(s16)
%2:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %9(s32), %9(s32)
- %6:sgpr(s16) = G_FCONSTANT half 0xH4400
+ %6:sgpr(s16) = G_FCONSTANT half f0x4400
%10:sgpr(s32) = G_ANYEXT %6(s16)
%5:sgpr(<2 x s16>) = G_BUILD_VECTOR_TRUNC %10(s32), %10(s32)
%11:vgpr(<2 x s16>) = COPY %2(<2 x s16>)
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir
index bd699956500ca3..d5cda8808f795d 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir
@@ -27,7 +27,7 @@ legalized: true
body: |
bb.0:
; CHECK-LABEL: name: test_fconstant_f16_1
- ; CHECK: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half 0xH3C00
+ ; CHECK: [[C:%[0-9]+]]:sgpr(s16) = G_FCONSTANT half f0x3C00
; CHECK-NEXT: [[ANYEXT:%[0-9]+]]:sgpr(s32) = G_ANYEXT [[C]](s16)
%0:_(s16) = G_FCONSTANT half 1.0
%1:_(s32) = G_ANYEXT %0
diff --git a/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll b/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll
index cab8e0b8baaa52..c2aba22dcb3d7f 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll
@@ -303,7 +303,7 @@ declare half @llvm.canonicalize.f16(half)
; CHECK-LABEL: {{^}}bitcast_f32_to_v1i32:
define amdgpu_kernel void @bitcast_f32_to_v1i32(ptr addrspace(1) %out) {
- %f16 = call arcp afn half @llvm.canonicalize.f16(half 0xH03F0)
+ %f16 = call arcp afn half @llvm.canonicalize.f16(half f0x03F0)
%f32 = fpext half %f16 to float
%v = bitcast float %f32 to <1 x i32>
%v1 = extractelement <1 x i32> %v, i32 0
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll
index 598b4a5fcbd336..e6e327685f1511 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll
@@ -483,7 +483,7 @@ define i32 @select_add_bitcast_select(i1 %cond) {
; multiple uses.
define <2 x half> @multi_use_cast_regression(i1 %cond) {
; IR-LABEL: @multi_use_cast_regression(
-; IR-NEXT: [[SELECT:%.*]] = select i1 [[COND:%.*]], half 0xH3C00, half 0xH0000
+; IR-NEXT: [[SELECT:%.*]] = select i1 [[COND:%.*]], half f0x3C00, half f0x0000
; IR-NEXT: [[FPEXT:%.*]] = fpext half [[SELECT]] to float
; IR-NEXT: [[FSUB:%.*]] = fsub nsz float 1.000000e+00, [[FPEXT]]
; IR-NEXT: [[CALL:%.*]] = call nsz <2 x half> @llvm.amdgcn.cvt.pkrtz(float [[FPEXT]], float [[FSUB]])
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll
index b494ff8ba1f5dd..8772c21999f3df 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll
@@ -2067,7 +2067,7 @@ define half @test_pow_afn_f16_nnan_ninf__y_4(half %x) {
define half @test_pow_afn_f16_nnan_ninf__y_4_5(half %x) {
; CHECK-LABEL: define half @test_pow_afn_f16_nnan_ninf__y_4_5
; CHECK-SAME: (half [[X:%.*]]) {
-; CHECK-NEXT: [[POW:%.*]] = tail call nnan ninf afn half @_Z3powDhDh(half [[X]], half 0xH4480)
+; CHECK-NEXT: [[POW:%.*]] = tail call nnan ninf afn half @_Z3powDhDh(half [[X]], half f0x4480)
; CHECK-NEXT: ret half [[POW]]
;
%pow = tail call afn nnan ninf half @_Z3powDhDh(half %x, half 4.5)
@@ -2112,7 +2112,7 @@ define half @test_pow_afn_f16_nnan_ninf__y_neg5(half %x) {
; CHECK-NEXT: [[__POWX2:%.*]] = fmul nnan ninf afn half [[X]], [[X]]
; CHECK-NEXT: [[__POWX21:%.*]] = fmul nnan ninf afn half [[__POWX2]], [[__POWX2]]
; CHECK-NEXT: [[__POWPROD:%.*]] = fmul nnan ninf afn half [[X]], [[__POWX21]]
-; CHECK-NEXT: [[__1POWPROD:%.*]] = fdiv nnan ninf afn half 0xH3C00, [[__POWPROD]]
+; CHECK-NEXT: [[__1POWPROD:%.*]] = fdiv nnan ninf afn half f0x3C00, [[__POWPROD]]
; CHECK-NEXT: ret half [[__1POWPROD]]
;
%pow = tail call afn nnan ninf half @_Z3powDhDh(half %x, half -5.0)
@@ -2157,7 +2157,7 @@ define <2 x half> @test_pow_afn_v2f16_nnan_ninf__y_4(<2 x half> %x) {
define <2 x half> @test_pow_afn_v2f16_nnan_ninf__y_4_5(<2 x half> %x) {
; CHECK-LABEL: define <2 x half> @test_pow_afn_v2f16_nnan_ninf__y_4_5
; CHECK-SAME: (<2 x half> [[X:%.*]]) {
-; CHECK-NEXT: [[POW:%.*]] = tail call nnan ninf afn <2 x half> @_Z3powDv2_DhS_(<2 x half> [[X]], <2 x half> splat (half 0xH4480))
+; CHECK-NEXT: [[POW:%.*]] = tail call nnan ninf afn <2 x half> @_Z3powDv2_DhS_(<2 x half> [[X]], <2 x half> splat (half f0x4480))
; CHECK-NEXT: ret <2 x half> [[POW]]
;
%pow = tail call afn nnan ninf <2 x half> @_Z3powDv2_DhS_(<2 x half> %x, <2 x half> <half 4.5, half 4.5>)
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll
index a2d5ce2d658b57..33c519beecc524 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll
@@ -292,7 +292,7 @@ define half @test_rootn_f16_3(half %x) {
define half @test_rootn_f16_neg1(half %x) {
; CHECK-LABEL: define half @test_rootn_f16_neg1(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[__ROOTN2DIV:%.*]] = fdiv half 0xH3C00, [[X]]
+; CHECK-NEXT: [[__ROOTN2DIV:%.*]] = fdiv half f0x3C00, [[X]]
; CHECK-NEXT: ret half [[__ROOTN2DIV]]
;
%call = tail call half @_Z5rootnDhi(half %x, i32 -1)
@@ -303,7 +303,7 @@ define half @test_rootn_f16_neg2(half %x) {
; CHECK-LABEL: define half @test_rootn_f16_neg2(
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: [[TMP1:%.*]] = call contract half @llvm.sqrt.f16(half [[X]])
-; CHECK-NEXT: [[__ROOTN2RSQRT:%.*]] = fdiv contract half 0xH3C00, [[TMP1]], !fpmath [[META0]]
+; CHECK-NEXT: [[__ROOTN2RSQRT:%.*]] = fdiv contract half f0x3C00, [[TMP1]], !fpmath [[META0]]
; CHECK-NEXT: ret half [[__ROOTN2RSQRT]]
;
%call = tail call half @_Z5rootnDhi(half %x, i32 -2)
@@ -362,7 +362,7 @@ define <2 x half> @test_rootn_v2f16_2(<2 x half> %x) {
define <2 x half> @test_rootn_v2f16_neg1(<2 x half> %x) {
; CHECK-LABEL: define <2 x half> @test_rootn_v2f16_neg1(
; CHECK-SAME: <2 x half> [[X:%.*]]) {
-; CHECK-NEXT: [[__ROOTN2DIV:%.*]] = fdiv <2 x half> splat (half 0xH3C00), [[X]]
+; CHECK-NEXT: [[__ROOTN2DIV:%.*]] = fdiv <2 x half> splat (half f0x3C00), [[X]]
; CHECK-NEXT: ret <2 x half> [[__ROOTN2DIV]]
;
%call = tail call <2 x half> @_Z5rootnDv2_DhDv2_i(<2 x half> %x, <2 x i32> <i32 -1, i32 -1>)
@@ -373,7 +373,7 @@ define <2 x half> @test_rootn_v2f16_neg2(<2 x half> %x) {
; CHECK-LABEL: define <2 x half> @test_rootn_v2f16_neg2(
; CHECK-SAME: <2 x half> [[X:%.*]]) {
; CHECK-NEXT: [[TMP1:%.*]] = call contract <2 x half> @llvm.sqrt.v2f16(<2 x half> [[X]])
-; CHECK-NEXT: [[__ROOTN2RSQRT:%.*]] = fdiv contract <2 x half> splat (half 0xH3C00), [[TMP1]], !fpmath [[META0]]
+; CHECK-NEXT: [[__ROOTN2RSQRT:%.*]] = fdiv contract <2 x half> splat (half f0x3C00), [[TMP1]], !fpmath [[META0]]
; CHECK-NEXT: ret <2 x half> [[__ROOTN2RSQRT]]
;
%call = tail call <2 x half> @_Z5rootnDv2_DhDv2_i(<2 x half> %x, <2 x i32> <i32 -2, i32 -2>)
diff --git a/llvm/test/CodeGen/AMDGPU/br_cc.f16.ll b/llvm/test/CodeGen/AMDGPU/br_cc.f16.ll
index 98832aaa3bc255..6c552fa9770f00 100644
--- a/llvm/test/CodeGen/AMDGPU/br_cc.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/br_cc.f16.ll
@@ -174,11 +174,11 @@ define amdgpu_kernel void @br_cc_f16_imm_a(
ptr addrspace(1) %b) {
entry:
%b.val = load half, ptr addrspace(1) %b
- %fcmp = fcmp olt half 0xH3800, %b.val
+ %fcmp = fcmp olt half f0x3800, %b.val
br i1 %fcmp, label %one, label %two
one:
- store half 0xH3800, ptr addrspace(1) %r
+ store half f0x3800, ptr addrspace(1) %r
ret void
two:
@@ -258,7 +258,7 @@ define amdgpu_kernel void @br_cc_f16_imm_b(
ptr addrspace(1) %a) {
entry:
%a.val = load half, ptr addrspace(1) %a
- %fcmp = fcmp olt half %a.val, 0xH3800
+ %fcmp = fcmp olt half %a.val, f0x3800
br i1 %fcmp, label %one, label %two
one:
@@ -266,6 +266,6 @@ one:
ret void
two:
- store half 0xH3800, ptr addrspace(1) %r
+ store half f0x3800, ptr addrspace(1) %r
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/build-vector-insert-elt-infloop.ll b/llvm/test/CodeGen/AMDGPU/build-vector-insert-elt-infloop.ll
index efe8f9303e2dda..0adea03dcfbc1f 100644
--- a/llvm/test/CodeGen/AMDGPU/build-vector-insert-elt-infloop.ll
+++ b/llvm/test/CodeGen/AMDGPU/build-vector-insert-elt-infloop.ll
@@ -12,7 +12,7 @@ bb:
bb1:
%tmp = phi <2 x i16> [ <i16 15360, i16 15360>, %bb ], [ %tmp5, %bb1 ]
- %tmp2 = phi half [ 0xH0000, %bb ], [ %tmp8, %bb1 ]
+ %tmp2 = phi half [ f0x0000, %bb ], [ %tmp8, %bb1 ]
%tmp3 = load volatile half, ptr null, align 536870912
%tmp4 = bitcast half %tmp3 to i16
%tmp5 = insertelement <2 x i16> <i16 0, i16 undef>, i16 %tmp4, i32 1
diff --git a/llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll b/llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll
index b128be2186df29..8ba5d6a4719bc1 100644
--- a/llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll
+++ b/llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll
@@ -2452,7 +2452,7 @@ define half @fmul_select_f16_test10_sel_log2val_neg11_pos11(half %x, i32 %bool.a
; GFX11-GISEL-NEXT: v_ldexp_f16_e32 v0, v0, v1
; GFX11-GISEL-NEXT: s_setpc_b64 s[30:31]
%bool = icmp eq i32 %bool.arg1, %bool.arg2
- %y = select i1 %bool, half 0xH1000, half 0xH6800
+ %y = select i1 %bool, half f0x1000, half f0x6800
%ldexp = fmul half %x, %y
ret half %ldexp
}
@@ -2535,7 +2535,7 @@ define half @fmul_select_f16_test11_sel_log2val_pos7_neg14(half %x, i32 %bool.ar
; GFX11-GISEL-NEXT: v_ldexp_f16_e32 v0, v0, v1
; GFX11-GISEL-NEXT: s_setpc_b64 s[30:31]
%bool = icmp eq i32 %bool.arg1, %bool.arg2
- %y = select i1 %bool, half 0xH5800, half 0xH0400
+ %y = select i1 %bool, half f0x5800, half f0x0400
%ldexp = fmul half %x, %y
ret half %ldexp
}
@@ -3771,7 +3771,7 @@ define bfloat @fmul_select_bf16_test10_sel_log2val_pos65_pos56(bfloat %x, i32 %b
; GFX11-GISEL-NEXT: v_ldexp_f16_e64 v0, -v0, v1
; GFX11-GISEL-NEXT: s_setpc_b64 s[30:31]
%bool = icmp eq i32 %bool.arg1, %bool.arg2
- %y = select i1 %bool, bfloat 0xRE000, bfloat 0xRDB80
+ %y = select i1 %bool, bfloat f0xE000, bfloat f0xDB80
%ldexp = fmul bfloat %x, %y
ret bfloat %ldexp
}
@@ -3883,7 +3883,7 @@ define bfloat @fmul_select_bf16_test11_sel_log2val_neg22_pos25(bfloat %x, i32 %b
; GFX11-GISEL-NEXT: v_ldexp_f16_e32 v0, v0, v1
; GFX11-GISEL-NEXT: s_setpc_b64 s[30:31]
%bool = icmp eq i32 %bool.arg1, %bool.arg2
- %y = select i1 %bool, bfloat 0xR3480, bfloat 0xR4C00
+ %y = select i1 %bool, bfloat f0x3480, bfloat f0x4C00
%ldexp = fmul bfloat %x, %y
ret bfloat %ldexp
}
diff --git a/llvm/test/CodeGen/AMDGPU/extract-subvector-16bit.ll b/llvm/test/CodeGen/AMDGPU/extract-subvector-16bit.ll
index efbbe2b27f10f9..b74cfce1b1aa7d 100644
--- a/llvm/test/CodeGen/AMDGPU/extract-subvector-16bit.ll
+++ b/llvm/test/CodeGen/AMDGPU/extract-subvector-16bit.ll
@@ -478,8 +478,8 @@ F:
exit:
%m = phi <8 x half> [ %t, %T ], [ %f, %F ]
%v2 = shufflevector <8 x half> %m, <8 x half> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 2>
- %b2 = fcmp ugt <4 x half> %v2, <half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800>
- %r2 = select <4 x i1> %b2, <4 x half> <half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900>, <4 x half> <half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00>
+ %b2 = fcmp ugt <4 x half> %v2, <half f0x3800, half f0x3800, half f0x3800, half f0x3800>
+ %r2 = select <4 x i1> %b2, <4 x half> <half f0x3900, half f0x3900, half f0x3900, half f0x3900>, <4 x half> <half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00>
ret <4 x half> %r2
}
@@ -1090,8 +1090,8 @@ F:
exit:
%m = phi <16 x half> [ %t, %T ], [ %f, %F ]
%v2 = shufflevector <16 x half> %m, <16 x half> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 2>
- %b2 = fcmp ugt <4 x half> %v2, <half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800>
- %r2 = select <4 x i1> %b2, <4 x half> <half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900>, <4 x half> <half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00>
+ %b2 = fcmp ugt <4 x half> %v2, <half f0x3800, half f0x3800, half f0x3800, half f0x3800>
+ %r2 = select <4 x i1> %b2, <4 x half> <half f0x3900, half f0x3900, half f0x3900, half f0x3900>, <4 x half> <half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00>
ret <4 x half> %r2
}
@@ -1739,7 +1739,7 @@ F:
exit:
%m = phi <16 x half> [ %t, %T ], [ %f, %F ]
%v2 = shufflevector <16 x half> %m, <16 x half> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- %b2 = fcmp ugt <8 x half> %v2, <half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800>
- %r2 = select <8 x i1> %b2, <8 x half> <half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900, half 0xH3900>, <8 x half> <half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00, half 0xH3D00>
+ %b2 = fcmp ugt <8 x half> %v2, <half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800>
+ %r2 = select <8 x i1> %b2, <8 x half> <half f0x3900, half f0x3900, half f0x3900, half f0x3900, half f0x3900, half f0x3900, half f0x3900, half f0x3900>, <8 x half> <half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00, half f0x3D00>
ret <8 x half> %r2
}
diff --git a/llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll b/llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll
index 3199b76d279fab..4b56b205496d43 100644
--- a/llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll
@@ -720,7 +720,7 @@ define amdgpu_kernel void @test_default_denormals_fold_canonicalize_denormal0_f1
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH03FF)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x03FF)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -762,7 +762,7 @@ define amdgpu_kernel void @test_denormals_fold_canonicalize_denormal0_f16(ptr ad
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH03FF)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x03FF)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -804,7 +804,7 @@ define amdgpu_kernel void @test_default_denormals_fold_canonicalize_denormal1_f1
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH83FF)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x83FF)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -846,7 +846,7 @@ define amdgpu_kernel void @test_denormals_fold_canonicalize_denormal1_f16(ptr ad
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH83FF)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x83FF)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -888,7 +888,7 @@ define amdgpu_kernel void @test_fold_canonicalize_qnan_f16(ptr addrspace(1) %out
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH7C00)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x7C00)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1014,7 +1014,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan0_value_f16(ptr addrspace(
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH7C01)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x7C01)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1056,7 +1056,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan1_value_f16(ptr addrspace(
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xH7DFF)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0x7DFF)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1098,7 +1098,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan2_value_f16(ptr addrspace(
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xHFDFF)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0xFDFF)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1140,7 +1140,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan3_value_f16(ptr addrspace(
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b16 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call half @llvm.canonicalize.f16(half 0xHFC01)
+ %canonicalized = call half @llvm.canonicalize.f16(half f0xFC01)
store half %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1757,7 +1757,7 @@ define amdgpu_kernel void @test_no_denormals_fold_canonicalize_denormal0_v2f16(p
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH03FF, half 0xH03FF>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x03FF, half f0x03FF>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1799,7 +1799,7 @@ define amdgpu_kernel void @test_denormals_fold_canonicalize_denormal0_v2f16(ptr
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH03FF, half 0xH03FF>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x03FF, half f0x03FF>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1841,7 +1841,7 @@ define amdgpu_kernel void @test_no_denormals_fold_canonicalize_denormal1_v2f16(p
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH83FF, half 0xH83FF>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x83FF, half f0x83FF>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1883,7 +1883,7 @@ define amdgpu_kernel void @test_denormals_fold_canonicalize_denormal1_v2f16(ptr
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH83FF, half 0xH83FF>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x83FF, half f0x83FF>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -1925,7 +1925,7 @@ define amdgpu_kernel void @test_fold_canonicalize_qnan_v2f16(ptr addrspace(1) %o
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH7C00, half 0xH7C00>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x7C00, half f0x7C00>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -2051,7 +2051,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan0_value_v2f16(ptr addrspac
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH7C01, half 0xH7C01>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x7C01, half f0x7C01>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -2093,7 +2093,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan1_value_v2f16(ptr addrspac
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xH7DFF, half 0xH7DFF>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0x7DFF, half f0x7DFF>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -2135,7 +2135,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan2_value_v2f16(ptr addrspac
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xHFDFF, half 0xHFDFF>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0xFDFF, half f0xFDFF>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
@@ -2177,7 +2177,7 @@ define amdgpu_kernel void @test_fold_canonicalize_snan3_value_v2f16(ptr addrspac
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
- %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half 0xHFC01, half 0xHFC01>)
+ %canonicalized = call <2 x half> @llvm.canonicalize.v2f16(<2 x half> <half f0xFC01, half f0xFC01>)
store <2 x half> %canonicalized, ptr addrspace(1) %out
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll b/llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll
index 90fbb6c12382a0..a483e325b5bf88 100644
--- a/llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll
+++ b/llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll
@@ -56,7 +56,7 @@ define amdgpu_kernel void @load_i16_hi(ptr %arg, ptr %out) {
define amdgpu_kernel void @load_half_lo(ptr %arg, ptr %out) {
%gep = getelementptr inbounds half, ptr %arg, i32 4
%ld = load half, ptr %gep, align 2
- %vec = insertelement <2 x half> <half undef, half 0xH0000>, half %ld, i32 0
+ %vec = insertelement <2 x half> <half undef, half f0x0000>, half %ld, i32 0
%v = fadd <2 x half> %vec, %vec
store <2 x half> %v, ptr %out, align 4
ret void
@@ -68,7 +68,7 @@ define amdgpu_kernel void @load_half_lo(ptr %arg, ptr %out) {
define amdgpu_kernel void @load_half_hi(ptr %arg, ptr %out) {
%gep = getelementptr inbounds half, ptr %arg, i32 4
%ld = load half, ptr %gep, align 2
- %vec = insertelement <2 x half> <half 0xH0000, half undef>, half %ld, i32 1
+ %vec = insertelement <2 x half> <half f0x0000, half undef>, half %ld, i32 1
%v = fadd <2 x half> %vec, %vec
store <2 x half> %v, ptr %out, align 4
ret void
diff --git a/llvm/test/CodeGen/AMDGPU/fma.f16.ll b/llvm/test/CodeGen/AMDGPU/fma.f16.ll
index 822d40f7349b0f..a7f20c3f63619f 100644
--- a/llvm/test/CodeGen/AMDGPU/fma.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/fma.f16.ll
@@ -113,7 +113,7 @@ define half @test_fmaak(half %x, half %y, half %z) {
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_fmaak_f16 v0, v0, v1, 0x4200
; GFX12-NEXT: s_setpc_b64 s[30:31]
- %r = call half @llvm.fma.f16(half %x, half %y, half 0xH4200)
+ %r = call half @llvm.fma.f16(half %x, half %y, half f0x4200)
ret half %r
}
@@ -154,7 +154,7 @@ define half @test_fmamk(half %x, half %y, half %z) {
; GFX12-NEXT: s_wait_kmcnt 0x0
; GFX12-NEXT: v_fmamk_f16 v0, v0, 0x4200, v2
; GFX12-NEXT: s_setpc_b64 s[30:31]
- %r = call half @llvm.fma.f16(half %x, half 0xH4200, half %z)
+ %r = call half @llvm.fma.f16(half %x, half f0x4200, half %z)
ret half %r
}
@@ -272,10 +272,10 @@ define i32 @test_D139469_f16(half %arg) {
; GFX12-GISEL-NEXT: v_cndmask_b32_e64 v0, 0, 1, s0
; GFX12-GISEL-NEXT: s_setpc_b64 s[30:31]
bb:
- %i = fmul contract half %arg, 0xH291E
- %i1 = fcmp olt half %i, 0xH0000
- %i2 = fadd contract half %i, 0xH211E
- %i3 = fcmp olt half %i2, 0xH0000
+ %i = fmul contract half %arg, f0x291E
+ %i1 = fcmp olt half %i, f0x0000
+ %i2 = fadd contract half %i, f0x211E
+ %i3 = fcmp olt half %i2, f0x0000
%i4 = or i1 %i1, %i3
%i5 = zext i1 %i4 to i32
ret i32 %i5
@@ -434,10 +434,10 @@ define <2 x i32> @test_D139469_v2f16(<2 x half> %arg) {
; GFX12-GISEL-NEXT: v_cndmask_b32_e64 v1, 0, 1, s0
; GFX12-GISEL-NEXT: s_setpc_b64 s[30:31]
bb:
- %i = fmul contract <2 x half> %arg, <half 0xH291E, half 0xH291E>
- %i1 = fcmp olt <2 x half> %i, <half 0xH0000, half 0xH0000>
- %i2 = fadd contract <2 x half> %i, <half 0xH211E, half 0xH211E>
- %i3 = fcmp olt <2 x half> %i2, <half 0xH0000, half 0xH0000>
+ %i = fmul contract <2 x half> %arg, <half f0x291E, half f0x291E>
+ %i1 = fcmp olt <2 x half> %i, <half f0x0000, half f0x0000>
+ %i2 = fadd contract <2 x half> %i, <half f0x211E, half f0x211E>
+ %i3 = fcmp olt <2 x half> %i2, <half f0x0000, half f0x0000>
%i4 = or <2 x i1> %i1, %i3
%i5 = zext <2 x i1> %i4 to <2 x i32>
ret <2 x i32> %i5
diff --git a/llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll b/llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll
index 9ae60f99d5e094..773e1b294d63e8 100644
--- a/llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll
+++ b/llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll
@@ -2670,7 +2670,7 @@ define half @v_mul_0x1pn23_f16(half %x) {
; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GCN-NEXT: v_mul_f16_e32 v0, 2, v0
; GCN-NEXT: s_setpc_b64 s[30:31]
- %mul = fmul half %x, 0xH0002
+ %mul = fmul half %x, f0x0002
ret half %mul
}
diff --git a/llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll b/llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll
index b32630a97b3ad0..d2c6a638e78912 100644
--- a/llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll
@@ -1236,7 +1236,7 @@ define half @v_fneg_inv2pi_minnum_f16(half %a) #0 {
; GFX11-NEXT: v_min_f16_e32 v0, 0.15915494, v0
; GFX11-NEXT: v_xor_b32_e32 v0, 0x8000, v0
; GFX11-NEXT: s_setpc_b64 s[30:31]
- %min = call half @llvm.minnum.f16(half 0xH3118, half %a)
+ %min = call half @llvm.minnum.f16(half f0x3118, half %a)
%fneg = fneg half %min
ret half %fneg
}
@@ -1266,7 +1266,7 @@ define half @v_fneg_neg_inv2pi_minnum_f16(half %a) #0 {
; GFX11-NEXT: v_min_f16_e32 v0, 0.15915494, v0
; GFX11-NEXT: v_xor_b32_e32 v0, 0x8000, v0
; GFX11-NEXT: s_setpc_b64 s[30:31]
- %min = call half @llvm.minnum.f16(half 0xH3118, half %a)
+ %min = call half @llvm.minnum.f16(half f0x3118, half %a)
%fneg = fneg half %min
ret half %fneg
}
@@ -1358,7 +1358,7 @@ define half @v_fneg_inv2pi_minnum_foldable_use_f16(half %a, half %b) #0 {
; GFX11-NEXT: v_min_f16_e32 v0, 0.15915494, v0
; GFX11-NEXT: v_mul_f16_e64 v0, -v0, v1
; GFX11-NEXT: s_setpc_b64 s[30:31]
- %min = call half @llvm.minnum.f16(half 0xH3118, half %a)
+ %min = call half @llvm.minnum.f16(half f0x3118, half %a)
%fneg = fneg half %min
%mul = fmul half %fneg, %b
ret half %mul
diff --git a/llvm/test/CodeGen/AMDGPU/fneg-combines.ll b/llvm/test/CodeGen/AMDGPU/fneg-combines.ll
index 0cb4b8c960bbfd..1086d2aeef0baa 100644
--- a/llvm/test/CodeGen/AMDGPU/fneg-combines.ll
+++ b/llvm/test/CodeGen/AMDGPU/fneg-combines.ll
@@ -665,7 +665,7 @@ define amdgpu_kernel void @v_fneg_inv2pi_minnum_f16(ptr addrspace(1) %out, ptr a
%a.gep = getelementptr inbounds half, ptr addrspace(1) %a.ptr, i64 %tid.ext
%out.gep = getelementptr inbounds half, ptr addrspace(1) %out, i64 %tid.ext
%a = load volatile half, ptr addrspace(1) %a.gep
- %min = call half @llvm.minnum.f16(half 0xH3118, half %a)
+ %min = call half @llvm.minnum.f16(half f0x3118, half %a)
%fneg = fsub half -0.000000e+00, %min
store half %fneg, ptr addrspace(1) %out.gep
ret void
@@ -688,7 +688,7 @@ define amdgpu_kernel void @v_fneg_neg_inv2pi_minnum_f16(ptr addrspace(1) %out, p
%a.gep = getelementptr inbounds half, ptr addrspace(1) %a.ptr, i64 %tid.ext
%out.gep = getelementptr inbounds half, ptr addrspace(1) %out, i64 %tid.ext
%a = load volatile half, ptr addrspace(1) %a.gep
- %min = call half @llvm.minnum.f16(half 0xHB118, half %a)
+ %min = call half @llvm.minnum.f16(half f0xB118, half %a)
%fneg = fsub half -0.000000e+00, %min
store half %fneg, ptr addrspace(1) %out.gep
ret void
diff --git a/llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll b/llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll
index 9a72fe96b5c3af..22fdc43ae3fc58 100644
--- a/llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll
+++ b/llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll
@@ -1031,7 +1031,7 @@ define half @v_fneg_inv2pi_minnum_f16(half %a) #0 {
; VI-NEXT: v_min_f16_e32 v0, 0.15915494, v0
; VI-NEXT: v_xor_b32_e32 v0, 0x8000, v0
; VI-NEXT: s_setpc_b64 s[30:31]
- %min = call half @llvm.minnum.f16(half 0xH3118, half %a)
+ %min = call half @llvm.minnum.f16(half f0x3118, half %a)
%fneg = fneg half %min
ret half %fneg
}
@@ -1051,7 +1051,7 @@ define half @v_fneg_neg_inv2pi_minnum_f16(half %a) #0 {
; VI-NEXT: v_max_f16_e64 v0, -v0, -v0
; VI-NEXT: v_max_f16_e32 v0, 0.15915494, v0
; VI-NEXT: s_setpc_b64 s[30:31]
- %min = call half @llvm.minnum.f16(half 0xHB118, half %a)
+ %min = call half @llvm.minnum.f16(half f0xB118, half %a)
%fneg = fneg half %min
ret half %fneg
}
diff --git a/llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir b/llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir
index 919641f7e70d37..6192ae31c24d27 100644
--- a/llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir
+++ b/llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir
@@ -5,7 +5,7 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile float, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH3C00
+ %f16.add0 = fadd half %f16.val0, f0x3C00
%f32.add = fadd float %f32.val, 1.000000e+00
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile float %f32.add, ptr addrspace(1) undef
@@ -16,7 +16,7 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile float, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH3C00
+ %f16.add0 = fadd half %f16.val0, f0x3C00
%f32.add = fadd float %f32.val, 1.000000e+00
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile float %f32.add, ptr addrspace(1) undef
@@ -27,7 +27,7 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile float, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH3C00
+ %f16.add0 = fadd half %f16.val0, f0x3C00
%f32.add = fadd float %f32.val, 1.000000e+00
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile float %f32.add, ptr addrspace(1) undef
@@ -38,8 +38,8 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile float, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH3C00
- %f16.add1 = fadd half %f16.val1, 0xH3C00
+ %f16.add0 = fadd half %f16.val0, f0x3C00
+ %f16.add1 = fadd half %f16.val1, f0x3C00
%f32.add = fadd float %f32.val, 1.000000e+00
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile half %f16.add1, ptr addrspace(1) undef
@@ -50,8 +50,8 @@
define amdgpu_kernel void @add_i32_1_multi_f16_use() #0 {
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH0001
- %f16.add1 = fadd half %f16.val1, 0xH0001
+ %f16.add0 = fadd half %f16.val0, f0x0001
+ %f16.add1 = fadd half %f16.val1, f0x0001
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile half %f16.add1,ptr addrspace(1) undef
ret void
@@ -61,8 +61,8 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile float, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xHFFFE
- %f16.add1 = fadd half %f16.val1, 0xHFFFE
+ %f16.add0 = fadd half %f16.val0, f0xFFFE
+ %f16.add1 = fadd half %f16.val1, f0xFFFE
%f32.add = fadd float %f32.val, 0xffffffffc0000000
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile half %f16.add1, ptr addrspace(1) undef
@@ -85,7 +85,7 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile half, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH3C00
+ %f16.add0 = fadd half %f16.val0, f0x3C00
%f32.add = fadd half %f32.val, 1.000000e+00
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile half %f32.add, ptr addrspace(1) undef
@@ -96,7 +96,7 @@
%f16.val0 = load volatile half, ptr addrspace(1) undef
%f16.val1 = load volatile half, ptr addrspace(1) undef
%f32.val = load volatile half, ptr addrspace(1) undef
- %f16.add0 = fadd half %f16.val0, 0xH3C00
+ %f16.add0 = fadd half %f16.val0, f0x3C00
%f32.add = fadd half %f32.val, 1.000000e+00
store volatile half %f16.add0, ptr addrspace(1) undef
store volatile half %f32.add, ptr addrspace(1) undef
diff --git a/llvm/test/CodeGen/AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll b/llvm/test/CodeGen/AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll
index 2eb35977b8160b..b2410127e4ffb8 100644
--- a/llvm/test/CodeGen/AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll
+++ b/llvm/test/CodeGen/AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll
@@ -243,7 +243,7 @@ define <8 x half> @fmul_pow2_8xhalf(<8 x i16> %i) {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%p2 = shl <8 x i16> <i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1>, %i
%p2_f = uitofp <8 x i16> %p2 to <8 x half>
- %r = fmul <8 x half> <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>, %p2_f
+ %r = fmul <8 x half> <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>, %p2_f
ret <8 x half> %r
}
@@ -306,7 +306,7 @@ define <8 x half> @fmul_pow2_ldexp_8xhalf(<8 x i16> %i) {
; GFX11-NEXT: v_pack_b32_f16 v2, v5, v2
; GFX11-NEXT: v_pack_b32_f16 v3, v4, v3
; GFX11-NEXT: s_setpc_b64 s[30:31]
- %r = call <8 x half> @llvm.ldexp.v8f16.v8i16(<8 x half> <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>, <8 x i16> %i)
+ %r = call <8 x half> @llvm.ldexp.v8f16.v8i16(<8 x half> <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>, <8 x i16> %i)
ret <8 x half> %r
}
@@ -367,7 +367,7 @@ define <8 x half> @fdiv_pow2_8xhalf(<8 x i16> %i) {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%p2 = shl <8 x i16> <i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1>, %i
%p2_f = uitofp <8 x i16> %p2 to <8 x half>
- %r = fdiv <8 x half> <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>, %p2_f
+ %r = fdiv <8 x half> <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>, %p2_f
ret <8 x half> %r
}
@@ -1507,7 +1507,7 @@ define half @fdiv_pow_shl_cnt_fail_out_of_bounds(i32 %cnt) nounwind {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%shl = shl nuw i32 1, %cnt
%conv = uitofp i32 %shl to half
- %mul = fdiv half 0xH7000, %conv
+ %mul = fdiv half f0x7000, %conv
ret half %mul
}
@@ -1535,7 +1535,7 @@ define half @fdiv_pow_shl_cnt_in_bounds(i16 %cnt) nounwind {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%shl = shl nuw i16 1, %cnt
%conv = uitofp i16 %shl to half
- %mul = fdiv half 0xH7000, %conv
+ %mul = fdiv half f0x7000, %conv
ret half %mul
}
@@ -1563,7 +1563,7 @@ define half @fdiv_pow_shl_cnt_in_bounds2(i16 %cnt) nounwind {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%shl = shl nuw i16 1, %cnt
%conv = uitofp i16 %shl to half
- %mul = fdiv half 0xH4800, %conv
+ %mul = fdiv half f0x4800, %conv
ret half %mul
}
@@ -1631,7 +1631,7 @@ define half @fdiv_pow_shl_cnt_fail_out_of_bound2(i16 %cnt) nounwind {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%shl = shl nuw i16 1, %cnt
%conv = uitofp i16 %shl to half
- %mul = fdiv half 0xH4000, %conv
+ %mul = fdiv half f0x4000, %conv
ret half %mul
}
diff --git a/llvm/test/CodeGen/AMDGPU/fp-classify.ll b/llvm/test/CodeGen/AMDGPU/fp-classify.ll
index e7c425a2d2752d..6c574b760bad9c 100644
--- a/llvm/test/CodeGen/AMDGPU/fp-classify.ll
+++ b/llvm/test/CodeGen/AMDGPU/fp-classify.ll
@@ -632,7 +632,7 @@ define amdgpu_kernel void @test_isinf_pattern_f16(ptr addrspace(1) nocapture %ou
; GFX11-NEXT: global_store_b32 v0, v1, s[0:1]
; GFX11-NEXT: s_endpgm
%fabs = tail call half @llvm.fabs.f16(half %x) #1
- %cmp = fcmp oeq half %fabs, 0xH7C00
+ %cmp = fcmp oeq half %fabs, f0x7C00
%ext = zext i1 %cmp to i32
store i32 %ext, ptr addrspace(1) %out, align 4
ret void
@@ -683,7 +683,7 @@ define amdgpu_kernel void @test_isfinite_pattern_0_f16(ptr addrspace(1) nocaptur
; GFX11-NEXT: s_endpgm
%ord = fcmp ord half %x, 0.0
%x.fabs = tail call half @llvm.fabs.f16(half %x) #1
- %ninf = fcmp une half %x.fabs, 0xH7C00
+ %ninf = fcmp une half %x.fabs, f0x7C00
%and = and i1 %ord, %ninf
%ext = zext i1 %and to i32
store i32 %ext, ptr addrspace(1) %out, align 4
@@ -732,7 +732,7 @@ define amdgpu_kernel void @test_isfinite_pattern_4_f16(ptr addrspace(1) nocaptur
; GFX11-NEXT: s_endpgm
%ord = fcmp ord half %x, 0.0
%x.fabs = tail call half @llvm.fabs.f16(half %x) #1
- %ninf = fcmp one half %x.fabs, 0xH7C00
+ %ninf = fcmp one half %x.fabs, f0x7C00
%and = and i1 %ord, %ninf
%ext = zext i1 %and to i32
store i32 %ext, ptr addrspace(1) %out, align 4
diff --git a/llvm/test/CodeGen/AMDGPU/fract-match.ll b/llvm/test/CodeGen/AMDGPU/fract-match.ll
index 80b4d64b1236f6..8000a3661e9697 100644
--- a/llvm/test/CodeGen/AMDGPU/fract-match.ll
+++ b/llvm/test/CodeGen/AMDGPU/fract-match.ll
@@ -1441,7 +1441,7 @@ define half @basic_fract_f16_nonan(half nofpclass(nan) %x) {
; GFX6-IR-NEXT: entry:
; GFX6-IR-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; GFX6-IR-NEXT: [[SUB:%.*]] = fsub half [[X]], [[FLOOR]]
-; GFX6-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half 0xH3BFF)
+; GFX6-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half f0x3BFF)
; GFX6-IR-NEXT: ret half [[MIN]]
;
; GFX7-IR-LABEL: define half @basic_fract_f16_nonan
@@ -1449,7 +1449,7 @@ define half @basic_fract_f16_nonan(half nofpclass(nan) %x) {
; GFX7-IR-NEXT: entry:
; GFX7-IR-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; GFX7-IR-NEXT: [[SUB:%.*]] = fsub half [[X]], [[FLOOR]]
-; GFX7-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half 0xH3BFF)
+; GFX7-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half f0x3BFF)
; GFX7-IR-NEXT: ret half [[MIN]]
;
; IR-LEGALF16-LABEL: define half @basic_fract_f16_nonan
@@ -1502,7 +1502,7 @@ define half @basic_fract_f16_nonan(half nofpclass(nan) %x) {
entry:
%floor = tail call half @llvm.floor.f16(half %x)
%sub = fsub half %x, %floor
- %min = tail call half @llvm.minnum.f16(half %sub, half 0xH3BFF)
+ %min = tail call half @llvm.minnum.f16(half %sub, half f0x3BFF)
ret half %min
}
@@ -1512,7 +1512,7 @@ define <2 x half> @basic_fract_v2f16_nonan(<2 x half> nofpclass(nan) %x) {
; GFX6-IR-NEXT: entry:
; GFX6-IR-NEXT: [[FLOOR:%.*]] = tail call <2 x half> @llvm.floor.v2f16(<2 x half> [[X]])
; GFX6-IR-NEXT: [[SUB:%.*]] = fsub <2 x half> [[X]], [[FLOOR]]
-; GFX6-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half 0xH3BFF))
+; GFX6-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half f0x3BFF))
; GFX6-IR-NEXT: ret <2 x half> [[MIN]]
;
; GFX7-IR-LABEL: define <2 x half> @basic_fract_v2f16_nonan
@@ -1520,7 +1520,7 @@ define <2 x half> @basic_fract_v2f16_nonan(<2 x half> nofpclass(nan) %x) {
; GFX7-IR-NEXT: entry:
; GFX7-IR-NEXT: [[FLOOR:%.*]] = tail call <2 x half> @llvm.floor.v2f16(<2 x half> [[X]])
; GFX7-IR-NEXT: [[SUB:%.*]] = fsub <2 x half> [[X]], [[FLOOR]]
-; GFX7-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half 0xH3BFF))
+; GFX7-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half f0x3BFF))
; GFX7-IR-NEXT: ret <2 x half> [[MIN]]
;
; IR-LEGALF16-LABEL: define <2 x half> @basic_fract_v2f16_nonan
@@ -1598,7 +1598,7 @@ define <2 x half> @basic_fract_v2f16_nonan(<2 x half> nofpclass(nan) %x) {
entry:
%floor = tail call <2 x half> @llvm.floor.v2f16(<2 x half> %x)
%sub = fsub <2 x half> %x, %floor
- %min = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> %sub, <2 x half> <half 0xH3BFF, half 0xH3BFF>)
+ %min = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> %sub, <2 x half> <half f0x3BFF, half f0x3BFF>)
ret <2 x half> %min
}
@@ -1674,8 +1674,8 @@ define half @safe_math_fract_f16_noinf_check(half %x, ptr addrspace(1) nocapture
; GFX6-IR-NEXT: entry:
; GFX6-IR-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; GFX6-IR-NEXT: [[SUB:%.*]] = fsub half [[X]], [[FLOOR]]
-; GFX6-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half 0xH3BFF)
-; GFX6-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], 0xH0000
+; GFX6-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half f0x3BFF)
+; GFX6-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], f0x0000
; GFX6-IR-NEXT: [[COND:%.*]] = select i1 [[UNO]], half [[X]], half [[MIN]]
; GFX6-IR-NEXT: store half [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; GFX6-IR-NEXT: ret half [[COND]]
@@ -1685,8 +1685,8 @@ define half @safe_math_fract_f16_noinf_check(half %x, ptr addrspace(1) nocapture
; GFX7-IR-NEXT: entry:
; GFX7-IR-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; GFX7-IR-NEXT: [[SUB:%.*]] = fsub half [[X]], [[FLOOR]]
-; GFX7-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half 0xH3BFF)
-; GFX7-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], 0xH0000
+; GFX7-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half f0x3BFF)
+; GFX7-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], f0x0000
; GFX7-IR-NEXT: [[COND:%.*]] = select i1 [[UNO]], half [[X]], half [[MIN]]
; GFX7-IR-NEXT: store half [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; GFX7-IR-NEXT: ret half [[COND]]
@@ -1768,7 +1768,7 @@ define half @safe_math_fract_f16_noinf_check(half %x, ptr addrspace(1) nocapture
entry:
%floor = tail call half @llvm.floor.f16(half %x)
%sub = fsub half %x, %floor
- %min = tail call half @llvm.minnum.f16(half %sub, half 0xH3BFF)
+ %min = tail call half @llvm.minnum.f16(half %sub, half f0x3BFF)
%uno = fcmp uno half %x, 0.000000e+00
%cond = select i1 %uno, half %x, half %min
store half %floor, ptr addrspace(1) %ip, align 4
@@ -2270,12 +2270,12 @@ define half @safe_math_fract_f16(half %x, ptr addrspace(1) nocapture writeonly %
; GFX6-IR-NEXT: entry:
; GFX6-IR-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; GFX6-IR-NEXT: [[SUB:%.*]] = fsub half [[X]], [[FLOOR]]
-; GFX6-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half 0xH3BFF)
-; GFX6-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], 0xH0000
+; GFX6-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half f0x3BFF)
+; GFX6-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], f0x0000
; GFX6-IR-NEXT: [[COND:%.*]] = select i1 [[UNO]], half [[X]], half [[MIN]]
; GFX6-IR-NEXT: [[FABS:%.*]] = tail call half @llvm.fabs.f16(half [[X]])
-; GFX6-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
-; GFX6-IR-NEXT: [[COND6:%.*]] = select i1 [[CMPINF]], half 0xH0000, half [[COND]]
+; GFX6-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], f0x7C00
+; GFX6-IR-NEXT: [[COND6:%.*]] = select i1 [[CMPINF]], half f0x0000, half [[COND]]
; GFX6-IR-NEXT: store half [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; GFX6-IR-NEXT: ret half [[COND6]]
;
@@ -2284,12 +2284,12 @@ define half @safe_math_fract_f16(half %x, ptr addrspace(1) nocapture writeonly %
; GFX7-IR-NEXT: entry:
; GFX7-IR-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; GFX7-IR-NEXT: [[SUB:%.*]] = fsub half [[X]], [[FLOOR]]
-; GFX7-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half 0xH3BFF)
-; GFX7-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], 0xH0000
+; GFX7-IR-NEXT: [[MIN:%.*]] = tail call half @llvm.minnum.f16(half [[SUB]], half f0x3BFF)
+; GFX7-IR-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], f0x0000
; GFX7-IR-NEXT: [[COND:%.*]] = select i1 [[UNO]], half [[X]], half [[MIN]]
; GFX7-IR-NEXT: [[FABS:%.*]] = tail call half @llvm.fabs.f16(half [[X]])
-; GFX7-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
-; GFX7-IR-NEXT: [[COND6:%.*]] = select i1 [[CMPINF]], half 0xH0000, half [[COND]]
+; GFX7-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], f0x7C00
+; GFX7-IR-NEXT: [[COND6:%.*]] = select i1 [[CMPINF]], half f0x0000, half [[COND]]
; GFX7-IR-NEXT: store half [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; GFX7-IR-NEXT: ret half [[COND6]]
;
@@ -2299,8 +2299,8 @@ define half @safe_math_fract_f16(half %x, ptr addrspace(1) nocapture writeonly %
; IR-LEGALF16-NEXT: [[FLOOR:%.*]] = tail call half @llvm.floor.f16(half [[X]])
; IR-LEGALF16-NEXT: [[COND:%.*]] = call half @llvm.amdgcn.fract.f16(half [[X]])
; IR-LEGALF16-NEXT: [[FABS:%.*]] = tail call half @llvm.fabs.f16(half [[X]])
-; IR-LEGALF16-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
-; IR-LEGALF16-NEXT: [[COND6:%.*]] = select i1 [[CMPINF]], half 0xH0000, half [[COND]]
+; IR-LEGALF16-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], f0x7C00
+; IR-LEGALF16-NEXT: [[COND6:%.*]] = select i1 [[CMPINF]], half f0x0000, half [[COND]]
; IR-LEGALF16-NEXT: store half [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; IR-LEGALF16-NEXT: ret half [[COND6]]
;
@@ -2390,11 +2390,11 @@ define half @safe_math_fract_f16(half %x, ptr addrspace(1) nocapture writeonly %
entry:
%floor = tail call half @llvm.floor.f16(half %x)
%sub = fsub half %x, %floor
- %min = tail call half @llvm.minnum.f16(half %sub, half 0xH3BFF)
+ %min = tail call half @llvm.minnum.f16(half %sub, half f0x3BFF)
%uno = fcmp uno half %x, 0.000000e+00
%cond = select i1 %uno, half %x, half %min
%fabs = tail call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
%cond6 = select i1 %cmpinf, half 0.000000e+00, half %cond
store half %floor, ptr addrspace(1) %ip, align 4
ret half %cond6
@@ -2406,11 +2406,11 @@ define <2 x half> @safe_math_fract_v2f16(<2 x half> %x, ptr addrspace(1) nocaptu
; GFX6-IR-NEXT: entry:
; GFX6-IR-NEXT: [[FLOOR:%.*]] = tail call <2 x half> @llvm.floor.v2f16(<2 x half> [[X]])
; GFX6-IR-NEXT: [[SUB:%.*]] = fsub <2 x half> [[X]], [[FLOOR]]
-; GFX6-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half 0xH3BFF))
+; GFX6-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half f0x3BFF))
; GFX6-IR-NEXT: [[UNO:%.*]] = fcmp uno <2 x half> [[X]], zeroinitializer
; GFX6-IR-NEXT: [[COND:%.*]] = select <2 x i1> [[UNO]], <2 x half> [[X]], <2 x half> [[MIN]]
; GFX6-IR-NEXT: [[FABS:%.*]] = tail call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X]])
-; GFX6-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq <2 x half> [[FABS]], splat (half 0xH7C00)
+; GFX6-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq <2 x half> [[FABS]], splat (half f0x7C00)
; GFX6-IR-NEXT: [[COND6:%.*]] = select <2 x i1> [[CMPINF]], <2 x half> zeroinitializer, <2 x half> [[COND]]
; GFX6-IR-NEXT: store <2 x half> [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; GFX6-IR-NEXT: ret <2 x half> [[COND6]]
@@ -2420,11 +2420,11 @@ define <2 x half> @safe_math_fract_v2f16(<2 x half> %x, ptr addrspace(1) nocaptu
; GFX7-IR-NEXT: entry:
; GFX7-IR-NEXT: [[FLOOR:%.*]] = tail call <2 x half> @llvm.floor.v2f16(<2 x half> [[X]])
; GFX7-IR-NEXT: [[SUB:%.*]] = fsub <2 x half> [[X]], [[FLOOR]]
-; GFX7-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half 0xH3BFF))
+; GFX7-IR-NEXT: [[MIN:%.*]] = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> [[SUB]], <2 x half> splat (half f0x3BFF))
; GFX7-IR-NEXT: [[UNO:%.*]] = fcmp uno <2 x half> [[X]], zeroinitializer
; GFX7-IR-NEXT: [[COND:%.*]] = select <2 x i1> [[UNO]], <2 x half> [[X]], <2 x half> [[MIN]]
; GFX7-IR-NEXT: [[FABS:%.*]] = tail call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X]])
-; GFX7-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq <2 x half> [[FABS]], splat (half 0xH7C00)
+; GFX7-IR-NEXT: [[CMPINF:%.*]] = fcmp oeq <2 x half> [[FABS]], splat (half f0x7C00)
; GFX7-IR-NEXT: [[COND6:%.*]] = select <2 x i1> [[CMPINF]], <2 x half> zeroinitializer, <2 x half> [[COND]]
; GFX7-IR-NEXT: store <2 x half> [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; GFX7-IR-NEXT: ret <2 x half> [[COND6]]
@@ -2440,7 +2440,7 @@ define <2 x half> @safe_math_fract_v2f16(<2 x half> %x, ptr addrspace(1) nocaptu
; IR-LEGALF16-NEXT: [[TMP4:%.*]] = insertelement <2 x half> poison, half [[TMP2]], i64 0
; IR-LEGALF16-NEXT: [[COND:%.*]] = insertelement <2 x half> [[TMP4]], half [[TMP3]], i64 1
; IR-LEGALF16-NEXT: [[FABS:%.*]] = tail call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X]])
-; IR-LEGALF16-NEXT: [[CMPINF:%.*]] = fcmp oeq <2 x half> [[FABS]], splat (half 0xH7C00)
+; IR-LEGALF16-NEXT: [[CMPINF:%.*]] = fcmp oeq <2 x half> [[FABS]], splat (half f0x7C00)
; IR-LEGALF16-NEXT: [[COND6:%.*]] = select <2 x i1> [[CMPINF]], <2 x half> zeroinitializer, <2 x half> [[COND]]
; IR-LEGALF16-NEXT: store <2 x half> [[FLOOR]], ptr addrspace(1) [[IP]], align 4
; IR-LEGALF16-NEXT: ret <2 x half> [[COND6]]
@@ -2579,11 +2579,11 @@ define <2 x half> @safe_math_fract_v2f16(<2 x half> %x, ptr addrspace(1) nocaptu
entry:
%floor = tail call <2 x half> @llvm.floor.v2f16(<2 x half> %x)
%sub = fsub <2 x half> %x, %floor
- %min = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> %sub, <2 x half> <half 0xH3BFF, half 0xH3BFF>)
+ %min = tail call <2 x half> @llvm.minnum.v2f16(<2 x half> %sub, <2 x half> <half f0x3BFF, half f0x3BFF>)
%uno = fcmp uno <2 x half> %x, zeroinitializer
%cond = select <2 x i1> %uno, <2 x half> %x, <2 x half> %min
%fabs = tail call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp oeq <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp oeq <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cond6 = select <2 x i1> %cmpinf, <2 x half> zeroinitializer, <2 x half> %cond
store <2 x half> %floor, ptr addrspace(1) %ip, align 4
ret <2 x half> %cond6
diff --git a/llvm/test/CodeGen/AMDGPU/imm16.ll b/llvm/test/CodeGen/AMDGPU/imm16.ll
index a2cc427bf6e548..4fdb118e747367 100644
--- a/llvm/test/CodeGen/AMDGPU/imm16.ll
+++ b/llvm/test/CodeGen/AMDGPU/imm16.ll
@@ -534,7 +534,7 @@ define amdgpu_kernel void @store_inline_imm_inv_2pi_f16(ptr addrspace(1) %out) {
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- store half 0xH3118, ptr addrspace(1) %out
+ store half f0x3118, ptr addrspace(1) %out
ret void
}
@@ -578,7 +578,7 @@ define amdgpu_kernel void @store_inline_imm_m_inv_2pi_f16(ptr addrspace(1) %out)
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- store half 0xHB118, ptr addrspace(1) %out
+ store half f0xB118, ptr addrspace(1) %out
ret void
}
@@ -1321,7 +1321,7 @@ define amdgpu_kernel void @add_inline_imm_1_f16(ptr addrspace(1) %out, half %x)
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- %y = fadd half %x, 0xH0001
+ %y = fadd half %x, f0x0001
store half %y, ptr addrspace(1) %out
ret void
}
@@ -1375,7 +1375,7 @@ define amdgpu_kernel void @add_inline_imm_2_f16(ptr addrspace(1) %out, half %x)
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- %y = fadd half %x, 0xH0002
+ %y = fadd half %x, f0x0002
store half %y, ptr addrspace(1) %out
ret void
}
@@ -1429,7 +1429,7 @@ define amdgpu_kernel void @add_inline_imm_16_f16(ptr addrspace(1) %out, half %x)
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- %y = fadd half %x, 0xH0010
+ %y = fadd half %x, f0x0010
store half %y, ptr addrspace(1) %out
ret void
}
@@ -1720,7 +1720,7 @@ define amdgpu_kernel void @add_inline_imm_63_f16(ptr addrspace(1) %out, half %x)
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- %y = fadd half %x, 0xH003F
+ %y = fadd half %x, f0x003F
store half %y, ptr addrspace(1) %out
ret void
}
@@ -1774,7 +1774,7 @@ define amdgpu_kernel void @add_inline_imm_64_f16(ptr addrspace(1) %out, half %x)
; SI-NEXT: s_waitcnt lgkmcnt(0)
; SI-NEXT: buffer_store_short v0, off, s[0:3], 0
; SI-NEXT: s_endpgm
- %y = fadd half %x, 0xH0040
+ %y = fadd half %x, f0x0040
store half %y, ptr addrspace(1) %out
ret void
}
@@ -2136,7 +2136,7 @@ define void @mul_inline_imm_inv2pi_i16(ptr addrspace(1) %out, i16 %x) {
; SI-NEXT: buffer_store_short v2, v[0:1], s[4:7], 0 addr64
; SI-NEXT: s_waitcnt vmcnt(0) expcnt(0)
; SI-NEXT: s_setpc_b64 s[30:31]
- %y = mul i16 %x, bitcast (half 0xH3118 to i16)
+ %y = mul i16 %x, bitcast (half f0x3118 to i16)
store i16 %y, ptr addrspace(1) %out
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/immv216.ll b/llvm/test/CodeGen/AMDGPU/immv216.ll
index 342d7b0237118d..e32c35829c2d70 100644
--- a/llvm/test/CodeGen/AMDGPU/immv216.ll
+++ b/llvm/test/CodeGen/AMDGPU/immv216.ll
@@ -97,7 +97,7 @@ define amdgpu_kernel void @store_inline_imm_m_4.0_v2f16(ptr addrspace(1) %out) #
; GCN: v_mov_b32_e32 [[REG:v[0-9]+]], 0x31183118 ; encoding
; GCN: buffer_store_{{dword|b32}} [[REG]]
define amdgpu_kernel void @store_inline_imm_inv_2pi_v2f16(ptr addrspace(1) %out) #0 {
- store <2 x half> <half 0xH3118, half 0xH3118>, ptr addrspace(1) %out
+ store <2 x half> <half f0x3118, half f0x3118>, ptr addrspace(1) %out
ret void
}
@@ -105,7 +105,7 @@ define amdgpu_kernel void @store_inline_imm_inv_2pi_v2f16(ptr addrspace(1) %out)
; GCN: v_mov_b32_e32 [[REG:v[0-9]+]], 0xb118b118 ; encoding
; GCN: buffer_store_{{dword|b32}} [[REG]]
define amdgpu_kernel void @store_inline_imm_m_inv_2pi_v2f16(ptr addrspace(1) %out) #0 {
- store <2 x half> <half 0xHB118, half 0xHB118>, ptr addrspace(1) %out
+ store <2 x half> <half f0xB118, half f0xB118>, ptr addrspace(1) %out
ret void
}
@@ -405,7 +405,7 @@ define amdgpu_kernel void @commute_add_literal_v2f16(ptr addrspace(1) %out, ptr
; VI: v_or_b32
; VI: buffer_store_dword
define amdgpu_kernel void @add_inline_imm_1_v2f16(ptr addrspace(1) %out, <2 x half> %x) #0 {
- %y = fadd <2 x half> %x, <half 0xH0001, half 0xH0001>
+ %y = fadd <2 x half> %x, <half f0x0001, half f0x0001>
store <2 x half> %y, ptr addrspace(1) %out
ret void
}
@@ -431,7 +431,7 @@ define amdgpu_kernel void @add_inline_imm_1_v2f16(ptr addrspace(1) %out, <2 x ha
; VI: v_or_b32
; VI: buffer_store_dword
define amdgpu_kernel void @add_inline_imm_2_v2f16(ptr addrspace(1) %out, <2 x half> %x) #0 {
- %y = fadd <2 x half> %x, <half 0xH0002, half 0xH0002>
+ %y = fadd <2 x half> %x, <half f0x0002, half f0x0002>
store <2 x half> %y, ptr addrspace(1) %out
ret void
}
@@ -457,7 +457,7 @@ define amdgpu_kernel void @add_inline_imm_2_v2f16(ptr addrspace(1) %out, <2 x ha
; VI: v_or_b32
; VI: buffer_store_dword
define amdgpu_kernel void @add_inline_imm_16_v2f16(ptr addrspace(1) %out, <2 x half> %x) #0 {
- %y = fadd <2 x half> %x, <half 0xH0010, half 0xH0010>
+ %y = fadd <2 x half> %x, <half f0x0010, half f0x0010>
store <2 x half> %y, ptr addrspace(1) %out
ret void
}
@@ -546,7 +546,7 @@ define amdgpu_kernel void @add_inline_imm_neg_16_v2f16(ptr addrspace(1) %out, <2
; VI: v_or_b32
; VI: buffer_store_dword
define amdgpu_kernel void @add_inline_imm_63_v2f16(ptr addrspace(1) %out, <2 x half> %x) #0 {
- %y = fadd <2 x half> %x, <half 0xH003F, half 0xH003F>
+ %y = fadd <2 x half> %x, <half f0x003F, half f0x003F>
store <2 x half> %y, ptr addrspace(1) %out
ret void
}
@@ -571,7 +571,7 @@ define amdgpu_kernel void @add_inline_imm_63_v2f16(ptr addrspace(1) %out, <2 x h
; VI: v_or_b32
; VI: buffer_store_dword
define amdgpu_kernel void @add_inline_imm_64_v2f16(ptr addrspace(1) %out, <2 x half> %x) #0 {
- %y = fadd <2 x half> %x, <half 0xH0040, half 0xH0040>
+ %y = fadd <2 x half> %x, <half f0x0040, half f0x0040>
store <2 x half> %y, ptr addrspace(1) %out
ret void
}
@@ -661,7 +661,7 @@ define <2 x i16> @mul_inline_imm_neg_4.0_v2i16(<2 x i16> %x) {
; GFX10: v_pk_mul_lo_u16 v0, 0x3118, v0 op_sel_hi:[0,1] ; encoding: [0x{{[0-9a-f]+}},0x{{[0-9a-f]+}},0x{{[0-9a-f]+}},0x{{[0-9a-f]+}},0xff,0x{{[0-9a-f]+}},0x{{[0-9a-f]+}},0x{{[0-9a-f]+}},0x18,0x31,0x00,0x00]
define <2 x i16> @mul_inline_imm_inv2pi_v2i16(<2 x i16> %x) {
- %y = mul <2 x i16> %x, bitcast (<2 x half> <half 0xH3118, half 0xH3118> to <2 x i16>)
+ %y = mul <2 x i16> %x, bitcast (<2 x half> <half f0x3118, half f0x3118> to <2 x i16>)
ret <2 x i16> %y
}
diff --git a/llvm/test/CodeGen/AMDGPU/inline-constraints.ll b/llvm/test/CodeGen/AMDGPU/inline-constraints.ll
index 7bd6b037386b04..97fe7ab8e1aa4e 100644
--- a/llvm/test/CodeGen/AMDGPU/inline-constraints.ll
+++ b/llvm/test/CodeGen/AMDGPU/inline-constraints.ll
@@ -112,14 +112,14 @@ define i32 @inline_A_constraint_H3() {
; NOSI: error: invalid operand for inline asm constraint 'A'
; VI-LABEL: {{^}}inline_A_constraint_H4:
define i32 @inline_A_constraint_H4() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(half 0xH3118)
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(half f0x3118)
ret i32 %v0
}
; NOSI: error: invalid operand for inline asm constraint 'A'
; VI-LABEL: {{^}}inline_A_constraint_H5:
define i32 @inline_A_constraint_H5() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(i16 bitcast (half 0xH3118 to i16))
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(i16 bitcast (half f0x3118 to i16))
ret i32 %v0
}
@@ -132,13 +132,13 @@ define i32 @inline_A_constraint_H6() {
; NOGCN: error: invalid operand for inline asm constraint 'A'
define i32 @inline_A_constraint_H7() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(i16 bitcast (half 0xH3119 to i16))
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(i16 bitcast (half f0x3119 to i16))
ret i32 %v0
}
; NOGCN: error: invalid operand for inline asm constraint 'A'
define i32 @inline_A_constraint_H8() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(i16 bitcast (half 0xH3117 to i16))
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,A"(i16 bitcast (half f0x3117 to i16))
ret i32 %v0
}
@@ -979,14 +979,14 @@ define i32 @inline_DA_constraint_H3() {
; NOSI: error: invalid operand for inline asm constraint 'DA'
; VI-LABEL: {{^}}inline_DA_constraint_H4:
define i32 @inline_DA_constraint_H4() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,^DA"(half 0xH3118)
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,^DA"(half f0x3118)
ret i32 %v0
}
; NOSI: error: invalid operand for inline asm constraint 'DA'
; VI-LABEL: {{^}}inline_DA_constraint_H5:
define i32 @inline_DA_constraint_H5() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,^DA"(i16 bitcast (half 0xH3118 to i16))
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,^DA"(i16 bitcast (half f0x3118 to i16))
ret i32 %v0
}
@@ -999,7 +999,7 @@ define i32 @inline_DA_constraint_H6() {
; NOGCN: error: invalid operand for inline asm constraint 'DA'
define i32 @inline_DA_constraint_H7() {
- %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,^DA"(i16 bitcast (half 0xH3119 to i16))
+ %v0 = tail call i32 asm "v_mov_b32 $0, $1", "=v,^DA"(i16 bitcast (half f0x3119 to i16))
ret i32 %v0
}
diff --git a/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2bf16.ll b/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2bf16.ll
index 48a168b4bfbe71..ef8fa9fa29c1a5 100644
--- a/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2bf16.ll
+++ b/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2bf16.ll
@@ -259,7 +259,7 @@ define amdgpu_kernel void @v_insertelement_v2bf16_0_inlineimm(ptr addrspace(1) %
%in.gep = getelementptr inbounds <2 x bfloat>, ptr addrspace(1) %in, i64 %tid.ext
%out.gep = getelementptr inbounds <2 x bfloat>, ptr addrspace(1) %out, i64 %tid.ext
%vec = load <2 x bfloat>, ptr addrspace(1) %in.gep
- %vecins = insertelement <2 x bfloat> %vec, bfloat 0xR0035, i32 0
+ %vecins = insertelement <2 x bfloat> %vec, bfloat f0x0035, i32 0
store <2 x bfloat> %vecins, ptr addrspace(1) %out.gep
ret void
}
@@ -401,7 +401,7 @@ define amdgpu_kernel void @v_insertelement_v2bf16_1_inlineimm(ptr addrspace(1) %
%in.gep = getelementptr inbounds <2 x bfloat>, ptr addrspace(1) %in, i64 %tid.ext
%out.gep = getelementptr inbounds <2 x bfloat>, ptr addrspace(1) %out, i64 %tid.ext
%vec = load <2 x bfloat>, ptr addrspace(1) %in.gep
- %vecins = insertelement <2 x bfloat> %vec, bfloat 0xR0023, i32 1
+ %vecins = insertelement <2 x bfloat> %vec, bfloat f0x0023, i32 1
store <2 x bfloat> %vecins, ptr addrspace(1) %out.gep
ret void
}
@@ -500,7 +500,7 @@ define amdgpu_kernel void @v_insertelement_v2bf16_dynamic_vgpr(ptr addrspace(1)
%out.gep = getelementptr inbounds <2 x bfloat>, ptr addrspace(1) %out, i64 %tid.ext
%idx = load i32, ptr addrspace(1) %idx.gep
%vec = load <2 x bfloat>, ptr addrspace(1) %in.gep
- %vecins = insertelement <2 x bfloat> %vec, bfloat 0xR1234, i32 %idx
+ %vecins = insertelement <2 x bfloat> %vec, bfloat f0x1234, i32 %idx
store <2 x bfloat> %vecins, ptr addrspace(1) %out.gep
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll b/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll
index d09af8fd2ac954..bfa05e77218e8d 100644
--- a/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll
+++ b/llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll
@@ -1151,7 +1151,7 @@ define amdgpu_kernel void @v_insertelement_v2f16_0_inlineimm(ptr addrspace(1) %o
%in.gep = getelementptr inbounds <2 x half>, ptr addrspace(1) %in, i64 %tid.ext
%out.gep = getelementptr inbounds <2 x half>, ptr addrspace(1) %out, i64 %tid.ext
%vec = load <2 x half>, ptr addrspace(1) %in.gep
- %vecins = insertelement <2 x half> %vec, half 0xH0035, i32 0
+ %vecins = insertelement <2 x half> %vec, half f0x0035, i32 0
store <2 x half> %vecins, ptr addrspace(1) %out.gep
ret void
}
@@ -1295,7 +1295,7 @@ define amdgpu_kernel void @v_insertelement_v2f16_1_inlineimm(ptr addrspace(1) %o
%in.gep = getelementptr inbounds <2 x half>, ptr addrspace(1) %in, i64 %tid.ext
%out.gep = getelementptr inbounds <2 x half>, ptr addrspace(1) %out, i64 %tid.ext
%vec = load <2 x half>, ptr addrspace(1) %in.gep
- %vecins = insertelement <2 x half> %vec, half 0xH0023, i32 1
+ %vecins = insertelement <2 x half> %vec, half f0x0023, i32 1
store <2 x half> %vecins, ptr addrspace(1) %out.gep
ret void
}
@@ -1566,7 +1566,7 @@ define amdgpu_kernel void @v_insertelement_v2f16_dynamic_vgpr(ptr addrspace(1) %
%out.gep = getelementptr inbounds <2 x half>, ptr addrspace(1) %out, i64 %tid.ext
%idx = load i32, ptr addrspace(1) %idx.gep
%vec = load <2 x half>, ptr addrspace(1) %in.gep
- %vecins = insertelement <2 x half> %vec, half 0xH1234, i32 %idx
+ %vecins = insertelement <2 x half> %vec, half f0x1234, i32 %idx
store <2 x half> %vecins, ptr addrspace(1) %out.gep
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll
index 004a720b9ab486..0e6c8db0842db7 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll
@@ -877,7 +877,7 @@ define amdgpu_ps void @wqm_deriv(<2 x float> %input, float %arg, i32 %index) {
br label %.continue1
.continue1:
- call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half 0xH3C00, half 0xH0000>, <2 x half> <half 0xH0000, half 0xH3C00>, i1 true, i1 true) #3
+ call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half f0x3C00, half f0x0000>, <2 x half> <half f0x0000, half f0x3C00>, i1 true, i1 true) #3
ret void
}
@@ -1176,7 +1176,7 @@ define amdgpu_ps void @wqm_deriv_loop(<2 x float> %input, float %arg, i32 %index
br i1 %loop.cond, label %.continue0, label %.return
.return:
- call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half 0xH3C00, half 0xH0000>, <2 x half> <half 0xH0000, half 0xH3C00>, i1 true, i1 true) #3
+ call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half f0x3C00, half f0x0000>, <2 x half> <half f0x0000, half f0x3C00>, i1 true, i1 true) #3
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/mad-mix.ll b/llvm/test/CodeGen/AMDGPU/mad-mix.ll
index b520dd1060ec8c..091467803bb46b 100644
--- a/llvm/test/CodeGen/AMDGPU/mad-mix.ll
+++ b/llvm/test/CodeGen/AMDGPU/mad-mix.ll
@@ -1109,7 +1109,7 @@ define float @v_mad_mix_f32_f16lo_f16lo_cvtf16imminv2pi(half %src0, half %src1)
; GISEL-CI-NEXT: s_setpc_b64 s[30:31]
%src0.ext = fpext half %src0 to float
%src1.ext = fpext half %src1 to float
- %src2 = fpext half 0xH3118 to float
+ %src2 = fpext half f0x3118 to float
%result = tail call float @llvm.fmuladd.f32(float %src0.ext, float %src1.ext, float %src2)
ret float %result
}
@@ -1210,7 +1210,7 @@ define float @v_mad_mix_f32_f16lo_f16lo_cvtf16imm63(half %src0, half %src1) #0 {
; GISEL-CI-NEXT: s_setpc_b64 s[30:31]
%src0.ext = fpext half %src0 to float
%src1.ext = fpext half %src1 to float
- %src2 = fpext half 0xH003F to float
+ %src2 = fpext half f0x003F to float
%result = tail call float @llvm.fmuladd.f32(float %src0.ext, float %src1.ext, float %src2)
ret float %result
}
@@ -1481,7 +1481,7 @@ define <2 x float> @v_mad_mix_v2f32_cvtf16imminv2pi(<2 x half> %src0, <2 x half>
; GISEL-CI-NEXT: s_setpc_b64 s[30:31]
%src0.ext = fpext <2 x half> %src0 to <2 x float>
%src1.ext = fpext <2 x half> %src1 to <2 x float>
- %src2 = fpext <2 x half> <half 0xH3118, half 0xH3118> to <2 x float>
+ %src2 = fpext <2 x half> <half f0x3118, half f0x3118> to <2 x float>
%result = tail call <2 x float> @llvm.fmuladd.v2f32(<2 x float> %src0.ext, <2 x float> %src1.ext, <2 x float> %src2)
ret <2 x float> %result
}
@@ -1616,7 +1616,7 @@ define <2 x float> @v_mad_mix_v2f32_f32imminv2pi(<2 x half> %src0, <2 x half> %s
; GISEL-CI-NEXT: s_setpc_b64 s[30:31]
%src0.ext = fpext <2 x half> %src0 to <2 x float>
%src1.ext = fpext <2 x half> %src1 to <2 x float>
- %src2 = fpext <2 x half> <half 0xH3118, half 0xH3118> to <2 x float>
+ %src2 = fpext <2 x half> <half f0x3118, half f0x3118> to <2 x float>
%result = tail call <2 x float> @llvm.fmuladd.v2f32(<2 x float> %src0.ext, <2 x float> %src1.ext, <2 x float> <float 0x3FC45F3060000000, float 0x3FC45F3060000000>)
ret <2 x float> %result
}
diff --git a/llvm/test/CodeGen/AMDGPU/mai-inline.ll b/llvm/test/CodeGen/AMDGPU/mai-inline.ll
index ee571651265764..a31f64683a7e5c 100644
--- a/llvm/test/CodeGen/AMDGPU/mai-inline.ll
+++ b/llvm/test/CodeGen/AMDGPU/mai-inline.ll
@@ -66,7 +66,7 @@ bb:
define amdgpu_kernel void @v_mfma_f32_4x4x4f16_aaaa(ptr addrspace(1) %arg) {
bb:
%in.1 = load <4 x float>, ptr addrspace(1) %arg
- %mai.1 = tail call <4 x float> asm "v_mfma_f32_4x4x4f16 $0, $1, $2, $3", "=a,a,a,a"(<4 x half> <half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800>, <4 x half> <half 0xH03FF, half 0xH03FF, half 0xH03FF, half 0xH03FF>, <4 x float> %in.1)
+ %mai.1 = tail call <4 x float> asm "v_mfma_f32_4x4x4f16 $0, $1, $2, $3", "=a,a,a,a"(<4 x half> <half f0x3800, half f0x3800, half f0x3800, half f0x3800>, <4 x half> <half f0x03FF, half f0x03FF, half f0x03FF, half f0x03FF>, <4 x float> %in.1)
store <4 x float> %mai.1, ptr addrspace(1) %arg
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll b/llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll
index 0f67a404972aaf..73e7572fb2b282 100644
--- a/llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll
+++ b/llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll
@@ -137,7 +137,7 @@ define amdgpu_cs void @mixed_vmem_types(i32 inreg %globalTable, i32 inreg %perSh
%i15 = getelementptr i8, ptr addrspace(4) %i6, i64 32
%i16 = load <8 x i32>, ptr addrspace(4) %i15, align 32
%i17 = load <4 x i32>, ptr addrspace(4) %i6, align 16
- %i18 = call float @llvm.amdgcn.image.sample.lz.2d.f32.f16.v8i32.v4i32(i32 1, half 0xHBC00, half 0xHBC00, <8 x i32> %i16, <4 x i32> %i17, i1 false, i32 0, i32 0)
+ %i18 = call float @llvm.amdgcn.image.sample.lz.2d.f32.f16.v8i32.v4i32(i32 1, half f0xBC00, half f0xBC00, <8 x i32> %i16, <4 x i32> %i17, i1 false, i32 0, i32 0)
%i19 = fcmp oeq float %i18, 0.000000e+00
%i20 = call i32 @llvm.amdgcn.raw.buffer.load.i32(<4 x i32> %i14, i32 0, i32 0, i32 0)
%.not = icmp eq i32 %i20, 2752
@@ -146,7 +146,7 @@ define amdgpu_cs void @mixed_vmem_types(i32 inreg %globalTable, i32 inreg %perSh
%i22 = getelementptr i8, ptr addrspace(4) %i3, i64 16
%i23 = load <8 x i32>, ptr addrspace(4) %i22, align 32
%i24 = load <4 x i32>, ptr addrspace(4) %i3, align 16
- %i25 = call float @llvm.amdgcn.image.sample.lz.2d.f32.f16.v8i32.v4i32(i32 1, half 0xHBC00, half 0xHBC00, <8 x i32> %i23, <4 x i32> %i24, i1 false, i32 0, i32 0)
+ %i25 = call float @llvm.amdgcn.image.sample.lz.2d.f32.f16.v8i32.v4i32(i32 1, half f0xBC00, half f0xBC00, <8 x i32> %i23, <4 x i32> %i24, i1 false, i32 0, i32 0)
%i26 = fcmp oeq float %i25, 1.000000e+00
%i27 = call i32 @llvm.amdgcn.raw.buffer.load.i32(<4 x i32> %i10, i32 0, i32 0, i32 0)
%.not2 = icmp eq i32 %i27, 2752
diff --git a/llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll b/llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll
index 10d08032bf59a5..8a12ad6a0df68a 100644
--- a/llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll
+++ b/llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll
@@ -728,7 +728,7 @@ bb5: ; preds = %bb3
; IR-NEXT: br i1 false, label %DummyReturnBlock, label %[[LOOP]]
; IR: [[EXP]]:
-; IR-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half 0xH3C00, half 0xH0000>, <2 x half> <half 0xH0000, half 0xH3C00>, i1 true, i1 true)
+; IR-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half f0x3C00, half f0x0000>, <2 x half> <half f0x0000, half f0x3C00>, i1 true, i1 true)
; IR-NEXT: ret void
; IR: DummyReturnBlock:
@@ -743,7 +743,7 @@ loop: ; preds = %loop, %.entry
br label %loop
bb27: ; preds = %.entry
- call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half 0xH3C00, half 0xH0000>, <2 x half> <half 0xH0000, half 0xH3C00>, i1 true, i1 true)
+ call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 15, <2 x half> <half f0x3C00, half f0x0000>, <2 x half> <half f0x0000, half f0x3C00>, i1 true, i1 true)
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/pack.v2f16.ll b/llvm/test/CodeGen/AMDGPU/pack.v2f16.ll
index da6120812ac1da..b1d8e5c9c6b4d2 100644
--- a/llvm/test/CodeGen/AMDGPU/pack.v2f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/pack.v2f16.ll
@@ -102,7 +102,7 @@ define amdgpu_kernel void @s_pack_v2f16_imm_lo(ptr addrspace(4) %in1) #0 {
%val1 = load i32, ptr addrspace(4) %in1
%hi.i = trunc i32 %val1 to i16
%hi = bitcast i16 %hi.i to half
- %vec.0 = insertelement <2 x half> undef, half 0xH1234, i32 0
+ %vec.0 = insertelement <2 x half> undef, half f0x1234, i32 0
%vec.1 = insertelement <2 x half> %vec.0, half %hi, i32 1
%vec.i32 = bitcast <2 x half> %vec.1 to i32
@@ -152,7 +152,7 @@ define amdgpu_kernel void @s_pack_v2f16_imm_hi(ptr addrspace(4) %in0) #0 {
%lo.i = trunc i32 %val0 to i16
%lo = bitcast i16 %lo.i to half
%vec.0 = insertelement <2 x half> undef, half %lo, i32 0
- %vec.1 = insertelement <2 x half> %vec.0, half 0xH1234, i32 1
+ %vec.1 = insertelement <2 x half> %vec.0, half f0x1234, i32 1
%vec.i32 = bitcast <2 x half> %vec.1 to i32
call void asm sideeffect "; use $0", "s"(i32 %vec.i32) #0
@@ -376,7 +376,7 @@ define amdgpu_kernel void @v_pack_v2f16_imm_lo(ptr addrspace(1) %in1) #0 {
%val1 = load volatile i32, ptr addrspace(1) %in1.gep
%hi.i = trunc i32 %val1 to i16
%hi = bitcast i16 %hi.i to half
- %vec.0 = insertelement <2 x half> undef, half 0xH1234, i32 0
+ %vec.0 = insertelement <2 x half> undef, half f0x1234, i32 0
%vec.1 = insertelement <2 x half> %vec.0, half %hi, i32 1
%vec.i32 = bitcast <2 x half> %vec.1 to i32
call void asm sideeffect "; use $0", "v"(i32 %vec.i32) #0
@@ -501,7 +501,7 @@ define amdgpu_kernel void @v_pack_v2f16_imm_hi(ptr addrspace(1) %in0) #0 {
%lo.i = trunc i32 %val0 to i16
%lo = bitcast i16 %lo.i to half
%vec.0 = insertelement <2 x half> undef, half %lo, i32 0
- %vec.1 = insertelement <2 x half> %vec.0, half 0xH1234, i32 1
+ %vec.1 = insertelement <2 x half> %vec.0, half f0x1234, i32 1
%vec.i32 = bitcast <2 x half> %vec.1 to i32
call void asm sideeffect "; use $0", "v"(i32 %vec.i32) #0
ret void
@@ -624,7 +624,7 @@ define amdgpu_kernel void @v_pack_v2f16_inline_imm_hi(ptr addrspace(1) %in0) #0
%lo.i = trunc i32 %val0 to i16
%lo = bitcast i16 %lo.i to half
%vec.0 = insertelement <2 x half> undef, half %lo, i32 0
- %vec.1 = insertelement <2 x half> %vec.0, half 0xH0040, i32 1
+ %vec.1 = insertelement <2 x half> %vec.0, half f0x0040, i32 1
%vec.i32 = bitcast <2 x half> %vec.1 to i32
call void asm sideeffect "; use $0", "v"(i32 %vec.i32) #0
ret void
diff --git a/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll b/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll
index b0590f6f83ab0d..017b84201cdada 100644
--- a/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll
+++ b/llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll
@@ -10,7 +10,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH0000, half 0xH3C00>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x0000, half f0x3C00>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -23,7 +23,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH3C00, half 0xH0000>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x3C00, half f0x0000>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -36,7 +36,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH3C00, half 0xH3C00>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x3C00, half f0x3C00>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -49,7 +49,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH0000, half 0xHBC00>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x0000, half f0xBC00>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -62,7 +62,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xHBC00, half 0xH0000>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0xBC00, half f0x0000>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -75,7 +75,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xHBC00, half 0xHBC00>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0xBC00, half f0xBC00>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -88,7 +88,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH0000, half 0xH0000>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x0000, half f0x0000>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -103,7 +103,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH0000, half 0xH41C8>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x0000, half f0x41C8>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
@@ -133,7 +133,7 @@ bb:
%tmp1 = zext i32 %tmp to i64
%tmp2 = getelementptr inbounds <2 x half>, ptr addrspace(1) %arg, i64 %tmp1
%tmp3 = load <2 x half>, ptr addrspace(1) %tmp2, align 4
- %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half 0xH42CA, half 0xH41C8>)
+ %tmp4 = tail call <2 x half> @llvm.maxnum.v2f16(<2 x half> %tmp3, <2 x half> <half f0x42CA, half f0x41C8>)
store <2 x half> %tmp4, ptr addrspace(1) %tmp2, align 4
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll b/llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll
index e7b405d7d92707..357fc59c0912f1 100644
--- a/llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll
+++ b/llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll
@@ -454,7 +454,7 @@ define bfloat @atomicrmw_fadd_private_bf16(ptr addrspace(5) %ptr) {
; IR-LABEL: define bfloat @atomicrmw_fadd_private_bf16(
; IR-SAME: ptr addrspace(5) [[PTR:%.*]]) #[[ATTR0]] {
; IR-NEXT: [[TMP1:%.*]] = load bfloat, ptr addrspace(5) [[PTR]], align 2
-; IR-NEXT: [[NEW:%.*]] = fadd bfloat [[TMP1]], 0xR4000
+; IR-NEXT: [[NEW:%.*]] = fadd bfloat [[TMP1]], f0x4000
; IR-NEXT: store bfloat [[NEW]], ptr addrspace(5) [[PTR]], align 2
; IR-NEXT: ret bfloat [[TMP1]]
;
diff --git a/llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll b/llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll
index 99a4bb83c0c442..06072a3fb1e970 100644
--- a/llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll
+++ b/llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll
@@ -70,7 +70,7 @@ entry:
; GCN-DAG: s_mov_b32 s[[SL:[0-9]+]], 0x40003c00
; GCN: v_lshrrev_b64 v[{{[0-9:]+}}], v{{[0-9]+}}, s[[[SL]]:[[SH]]]
-; OPT: %0 = extractelement <4 x half> <half 0xH3C00, half 0xH4000, half 0xH4200, half 0xH4400>, i32 %sel2
+; OPT: %0 = extractelement <4 x half> <half f0x3C00, half f0x4000, half f0x4200, half f0x4400>, i32 %sel2
; OPT: store half %0, ptr addrspace(1) %out, align 2
define amdgpu_kernel void @half4_alloca_store4(ptr addrspace(1) %out, ptr addrspace(3) %dummy_lds) {
@@ -95,7 +95,7 @@ entry:
; GCN-NOT: buffer_
; GCN: s_mov_b64 s[{{[0-9:]+}}], 0xffff
-; OPT: %0 = insertelement <4 x half> undef, half 0xH3C00, i32 %sel2
+; OPT: %0 = insertelement <4 x half> undef, half f0x3C00, i32 %sel2
; OPT: store <4 x half> %0, ptr addrspace(1) %out, align 2
define amdgpu_kernel void @half4_alloca_load4(ptr addrspace(1) %out, ptr addrspace(3) %dummy_lds) {
diff --git a/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll b/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll
index 7c1da18de70f83..6c166dcda96800 100644
--- a/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll
@@ -779,7 +779,7 @@ define half @add_select_fneg_inv2pi_f16(i32 %c, half %x, half %y) {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%cmp = icmp eq i32 %c, 0
%fneg.x = fneg half %x
- %select = select i1 %cmp, half %fneg.x, half 0xH3118
+ %select = select i1 %cmp, half %fneg.x, half f0x3118
%add = fadd half %select, %y
ret half %add
}
@@ -817,7 +817,7 @@ define half @add_select_fneg_neginv2pi_f16(i32 %c, half %x, half %y) {
; GFX11-NEXT: s_setpc_b64 s[30:31]
%cmp = icmp eq i32 %c, 0
%fneg.x = fneg half %x
- %select = select i1 %cmp, half %fneg.x, half 0xHB118
+ %select = select i1 %cmp, half %fneg.x, half f0xB118
%add = fadd half %select, %y
ret half %add
}
diff --git a/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.v2f16.ll b/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.v2f16.ll
index d2bb971b680307..5a6a7c0e5d2fcd 100644
--- a/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.v2f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.v2f16.ll
@@ -1475,7 +1475,7 @@ define <2 x half> @add_select_fneg_inv2pi_v2f16(<2 x i32> %c, <2 x half> %x, <2
; GFX11-NEXT: s_setpc_b64 s[30:31]
%cmp = icmp eq <2 x i32> %c, zeroinitializer
%fneg.x = fneg <2 x half> %x
- %select = select <2 x i1> %cmp, <2 x half> %fneg.x, <2 x half> <half 0xH3118, half 0xH3118>
+ %select = select <2 x i1> %cmp, <2 x half> %fneg.x, <2 x half> <half f0x3118, half f0x3118>
%add = fadd <2 x half> %select, %y
ret <2 x half> %add
}
@@ -1544,7 +1544,7 @@ define <2 x half> @add_select_fneg_neginv2pi_v2f16(<2 x i32> %c, <2 x half> %x,
; GFX11-NEXT: s_setpc_b64 s[30:31]
%cmp = icmp eq <2 x i32> %c, zeroinitializer
%fneg.x = fneg <2 x half> %x
- %select = select <2 x i1> %cmp, <2 x half> %fneg.x, <2 x half> <half 0xHB118, half 0xHB118>
+ %select = select <2 x i1> %cmp, <2 x half> %fneg.x, <2 x half> <half f0xB118, half f0xB118>
%add = fadd <2 x half> %select, %y
ret <2 x half> %add
}
diff --git a/llvm/test/CodeGen/AMDGPU/select.f16.ll b/llvm/test/CodeGen/AMDGPU/select.f16.ll
index 572026da79646c..a1d9d3590242b0 100644
--- a/llvm/test/CodeGen/AMDGPU/select.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/select.f16.ll
@@ -239,7 +239,7 @@ entry:
%b.val = load volatile half, ptr addrspace(1) %b
%c.val = load volatile half, ptr addrspace(1) %c
%d.val = load volatile half, ptr addrspace(1) %d
- %fcmp = fcmp olt half 0xH3800, %b.val
+ %fcmp = fcmp olt half f0x3800, %b.val
%r.val = select i1 %fcmp, half %c.val, half %d.val
store half %r.val, ptr addrspace(1) %r
ret void
@@ -350,7 +350,7 @@ entry:
%a.val = load volatile half, ptr addrspace(1) %a
%c.val = load volatile half, ptr addrspace(1) %c
%d.val = load volatile half, ptr addrspace(1) %d
- %fcmp = fcmp olt half %a.val, 0xH3800
+ %fcmp = fcmp olt half %a.val, f0x3800
%r.val = select i1 %fcmp, half %c.val, half %d.val
store half %r.val, ptr addrspace(1) %r
ret void
@@ -463,7 +463,7 @@ entry:
%b.val = load volatile half, ptr addrspace(1) %b
%d.val = load volatile half, ptr addrspace(1) %d
%fcmp = fcmp olt half %a.val, %b.val
- %r.val = select i1 %fcmp, half 0xH3800, half %d.val
+ %r.val = select i1 %fcmp, half f0x3800, half %d.val
store half %r.val, ptr addrspace(1) %r
ret void
}
@@ -575,7 +575,7 @@ entry:
%b.val = load volatile half, ptr addrspace(1) %b
%c.val = load volatile half, ptr addrspace(1) %c
%fcmp = fcmp olt half %a.val, %b.val
- %r.val = select i1 %fcmp, half %c.val, half 0xH3800
+ %r.val = select i1 %fcmp, half %c.val, half f0x3800
store half %r.val, ptr addrspace(1) %r
ret void
}
@@ -872,7 +872,7 @@ entry:
%b.val = load <2 x half>, ptr addrspace(1) %b
%c.val = load <2 x half>, ptr addrspace(1) %c
%d.val = load <2 x half>, ptr addrspace(1) %d
- %fcmp = fcmp olt <2 x half> <half 0xH3800, half 0xH3900>, %b.val
+ %fcmp = fcmp olt <2 x half> <half f0x3800, half f0x3900>, %b.val
%r.val = select <2 x i1> %fcmp, <2 x half> %c.val, <2 x half> %d.val
store <2 x half> %r.val, ptr addrspace(1) %r
ret void
@@ -1011,7 +1011,7 @@ entry:
%a.val = load <2 x half>, ptr addrspace(1) %a
%c.val = load <2 x half>, ptr addrspace(1) %c
%d.val = load <2 x half>, ptr addrspace(1) %d
- %fcmp = fcmp olt <2 x half> %a.val, <half 0xH3800, half 0xH3900>
+ %fcmp = fcmp olt <2 x half> %a.val, <half f0x3800, half f0x3900>
%r.val = select <2 x i1> %fcmp, <2 x half> %c.val, <2 x half> %d.val
store <2 x half> %r.val, ptr addrspace(1) %r
ret void
@@ -1153,7 +1153,7 @@ entry:
%b.val = load <2 x half>, ptr addrspace(1) %b
%d.val = load <2 x half>, ptr addrspace(1) %d
%fcmp = fcmp olt <2 x half> %a.val, %b.val
- %r.val = select <2 x i1> %fcmp, <2 x half> <half 0xH3800, half 0xH3900>, <2 x half> %d.val
+ %r.val = select <2 x i1> %fcmp, <2 x half> <half f0x3800, half f0x3900>, <2 x half> %d.val
store <2 x half> %r.val, ptr addrspace(1) %r
ret void
}
@@ -1294,7 +1294,7 @@ entry:
%b.val = load <2 x half>, ptr addrspace(1) %b
%c.val = load <2 x half>, ptr addrspace(1) %c
%fcmp = fcmp olt <2 x half> %a.val, %b.val
- %r.val = select <2 x i1> %fcmp, <2 x half> %c.val, <2 x half> <half 0xH3800, half 0xH3900>
+ %r.val = select <2 x i1> %fcmp, <2 x half> %c.val, <2 x half> <half f0x3800, half f0x3900>
store <2 x half> %r.val, ptr addrspace(1) %r
ret void
}
diff --git a/llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll b/llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll
index 4993df7e1ba487..8cc884776aa9c7 100644
--- a/llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll
+++ b/llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll
@@ -430,7 +430,7 @@ declare <2 x half> @_Z3powDv2_DhS_(<2 x half>, <2 x half>)
; GCN-LABEL: define half @test_pow_fast_f16__y_13(half %x)
; GCN: %__fabs = tail call fast half @llvm.fabs.f16(half %x)
; GCN: %__log2 = tail call fast half @llvm.log2.f16(half %__fabs)
-; GCN: %__ylogx = fmul fast half %__log2, 0xH4A80
+; GCN: %__ylogx = fmul fast half %__log2, f0x4A80
; GCN: %__exp2 = tail call fast half @llvm.exp2.f16(half %__ylogx)
; GCN: %1 = tail call half @llvm.copysign.f16(half %__exp2, half %x)
define half @test_pow_fast_f16__y_13(half %x) {
@@ -441,7 +441,7 @@ define half @test_pow_fast_f16__y_13(half %x) {
; GCN-LABEL: define <2 x half> @test_pow_fast_v2f16__y_13(<2 x half> %x)
; GCN: %__fabs = tail call fast <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
; GCN: %__log2 = tail call fast <2 x half> @llvm.log2.v2f16(<2 x half> %__fabs)
-; GCN: %__ylogx = fmul fast <2 x half> %__log2, splat (half 0xH4A80)
+; GCN: %__ylogx = fmul fast <2 x half> %__log2, splat (half f0x4A80)
; GCN: %__exp2 = tail call fast <2 x half> @llvm.exp2.v2f16(<2 x half> %__ylogx)
; GCN: %1 = tail call <2 x half> @llvm.copysign.v2f16(<2 x half> %__exp2, <2 x half> %x)
define <2 x half> @test_pow_fast_v2f16__y_13(<2 x half> %x) {
diff --git a/llvm/test/CodeGen/ARM/arm-half-promote.ll b/llvm/test/CodeGen/ARM/arm-half-promote.ll
index e1ab75b2ac7f16..8546a22c8ed745 100644
--- a/llvm/test/CodeGen/ARM/arm-half-promote.ll
+++ b/llvm/test/CodeGen/ARM/arm-half-promote.ll
@@ -120,7 +120,7 @@ define void @extract_insert(ptr %dst) optnone noinline {
; CHECK: vcvtb.f16.f32 s0, s0
; CHECK: vmov r1, s0
; CHECK: strh r1, [r0]
- %splat.splatinsert = insertelement <1 x half> zeroinitializer, half 0xH0000, i32 0
+ %splat.splatinsert = insertelement <1 x half> zeroinitializer, half f0x0000, i32 0
br label %next
next:
diff --git a/llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll b/llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll
index 9570c70676dbb4..aa68a7d8de8f20 100644
--- a/llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll
+++ b/llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll
@@ -383,7 +383,7 @@ define dso_local <4 x half> @test_vneg_f16(<4 x half> %a) {
; CHECK-NEXT: vneg.f16 d0, d0
; CHECK-NEXT: bx lr
entry:
- %sub.i = fsub <4 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %a
+ %sub.i = fsub <4 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %a
ret <4 x half> %sub.i
}
@@ -394,7 +394,7 @@ define dso_local <8 x half> @test_vnegq_f16(<8 x half> %a) {
; CHECK-NEXT: vneg.f16 q0, q0
; CHECK-NEXT: bx lr
entry:
- %sub.i = fsub <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %a
+ %sub.i = fsub <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %a
ret <8 x half> %sub.i
}
@@ -1136,7 +1136,7 @@ define dso_local <4 x half> @test_vfms_f16(<4 x half> %a, <4 x half> %b, <4 x ha
; CHECK-NEXT: vfma.f16 d0, d16, d2
; CHECK-NEXT: bx lr
entry:
- %sub.i = fsub <4 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub.i = fsub <4 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%0 = tail call <4 x half> @llvm.fma.v4f16(<4 x half> %sub.i, <4 x half> %c, <4 x half> %a)
ret <4 x half> %0
}
@@ -1148,7 +1148,7 @@ define dso_local <8 x half> @test_vfmsq_f16(<8 x half> %a, <8 x half> %b, <8 x h
; CHECK-NEXT: vfma.f16 q0, q8, q2
; CHECK-NEXT: bx lr
entry:
- %sub.i = fsub <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>, %b
+ %sub.i = fsub <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>, %b
%0 = tail call <8 x half> @llvm.fma.v8f16(<8 x half> %sub.i, <8 x half> %c, <8 x half> %a)
ret <8 x half> %0
}
diff --git a/llvm/test/CodeGen/ARM/bf16-imm.ll b/llvm/test/CodeGen/ARM/bf16-imm.ll
index 7532bbcd09b3ff..0465d2e2ea2698 100644
--- a/llvm/test/CodeGen/ARM/bf16-imm.ll
+++ b/llvm/test/CodeGen/ARM/bf16-imm.ll
@@ -67,7 +67,7 @@ define bfloat @zero() {
; CHECK-HARD-NEXT: @ %bb.1:
; CHECK-HARD-NEXT: .LCPI1_0:
; CHECK-HARD-NEXT: .short 0x0000 @ bfloat 0
- ret bfloat 0xR0000
+ ret bfloat f0x0000
}
define bfloat @bitcast_tenk() {
@@ -133,5 +133,5 @@ define bfloat @minus0() {
; CHECK-HARD-NEXT: @ %bb.1:
; CHECK-HARD-NEXT: .LCPI3_0:
; CHECK-HARD-NEXT: .short 0x8000 @ bfloat -0
- ret bfloat 0xR8000
+ ret bfloat f0x8000
}
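
The bfloat hunks follow the same pattern; one decoding aid, again with a hypothetical @two function used purely for illustration:

    define bfloat @two() {
      ; bfloat is the top 16 bits of the binary32 pattern, so 0x4000 here
      ; corresponds to float 0x40000000, i.e. 2.0 (legacy spelling: 0xR4000).
      ret bfloat f0x4000
    }
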
diff --git a/llvm/test/CodeGen/ARM/const-load-align-thumb.mir b/llvm/test/CodeGen/ARM/const-load-align-thumb.mir
index 3bab48959cb019..f6f62376ae9cfd 100644
--- a/llvm/test/CodeGen/ARM/const-load-align-thumb.mir
+++ b/llvm/test/CodeGen/ARM/const-load-align-thumb.mir
@@ -6,7 +6,7 @@
define hidden i32 @main() {
entry:
%P5 = alloca half, align 2
- store half 0xH3FE0, ptr %P5, align 2
+ store half f0x3FE0, ptr %P5, align 2
%0 = load half, ptr %P5, align 2
call void @z_bar(half %0)
ret i32 0
@@ -33,7 +33,7 @@ stack:
- { id: 2, type: spill-slot, offset: -8, size: 4, alignment: 4, callee-saved-register: '$r7' }
constants:
- id: 0
- value: half 0xH3FE0
+ value: half f0x3FE0
alignment: 2
machineFunctionInfo: {}
body: |
diff --git a/llvm/test/CodeGen/ARM/constant-island-SOImm-limit16.mir b/llvm/test/CodeGen/ARM/constant-island-SOImm-limit16.mir
index bc0b8435840579..cb93c6039c38a3 100644
--- a/llvm/test/CodeGen/ARM/constant-island-SOImm-limit16.mir
+++ b/llvm/test/CodeGen/ARM/constant-island-SOImm-limit16.mir
@@ -27,11 +27,11 @@ constants:
-
id: 0
- value: half 0xH5440
+ value: half f0x5440
alignment: 2
-
id: 1
- value: half 0xH5441
+ value: half f0x5441
alignment: 2
machineFunctionInfo: {}
diff --git a/llvm/test/CodeGen/ARM/fp16-bitcast.ll b/llvm/test/CodeGen/ARM/fp16-bitcast.ll
index c1d99248d2f386..fd28689913a956 100644
--- a/llvm/test/CodeGen/ARM/fp16-bitcast.ll
+++ b/llvm/test/CodeGen/ARM/fp16-bitcast.ll
@@ -155,7 +155,7 @@ define half @constcall() {
; CHECK-FP16-HARD-NEXT: vmov.f16 s0, #1.000000e+01
; CHECK-FP16-HARD-NEXT: b ccc
entry:
- %call = tail call fast half @ccc(half 0xH4900)
+ %call = tail call fast half @ccc(half f0x4900)
ret half %call
}
@@ -185,7 +185,7 @@ define half @constret() {
; CHECK-FP16-HARD-NEXT: vmov.f16 s0, #1.000000e+01
; CHECK-FP16-HARD-NEXT: bx lr
entry:
- ret half 0xH4900
+ ret half f0x4900
}
declare half @ccc(half)
diff --git a/llvm/test/CodeGen/ARM/fp16-instructions.ll b/llvm/test/CodeGen/ARM/fp16-instructions.ll
index 7a1d5ddfa301b6..a3c1b91144d427 100644
--- a/llvm/test/CodeGen/ARM/fp16-instructions.ll
+++ b/llvm/test/CodeGen/ARM/fp16-instructions.ll
@@ -197,7 +197,7 @@ entry:
for.cond:
%0 = load half, ptr %f, align 2
- %cmp = fcmp nnan ninf nsz ole half %0, 0xH6800
+ %cmp = fcmp nnan ninf nsz ole half %0, f0x6800
br i1 %cmp, label %for.body, label %for.end
for.body:
@@ -541,7 +541,7 @@ define i32 @movi(i32 %a.coerce) {
entry:
%tmp.0.extract.trunc = trunc i32 %a.coerce to i16
%0 = bitcast i16 %tmp.0.extract.trunc to half
- %add = fadd half %0, 0xHC000
+ %add = fadd half %0, f0xC000
%1 = bitcast half %add to i16
%tmp2.0.insert.ext = zext i16 %1 to i32
ret i32 %tmp2.0.insert.ext
@@ -694,8 +694,8 @@ entry:
; 35. VSELEQ
define half @select_cc1(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz oeq half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz oeq half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc1:
@@ -722,8 +722,8 @@ define half @select_cc1(ptr %a0) {
; 36. VSELGE
define half @select_cc_ge1(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz oge half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz oge half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_ge1:
@@ -745,8 +745,8 @@ define half @select_cc_ge1(ptr %a0) {
define half @select_cc_ge2(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz ole half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz ole half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_ge2:
@@ -768,8 +768,8 @@ define half @select_cc_ge2(ptr %a0) {
define half @select_cc_ge3(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz ugt half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz ugt half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_ge3:
@@ -791,8 +791,8 @@ define half @select_cc_ge3(ptr %a0) {
define half @select_cc_ge4(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz ult half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz ult half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_ge4:
@@ -815,8 +815,8 @@ define half @select_cc_ge4(ptr %a0) {
; 37. VSELGT
define half @select_cc_gt1(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz ogt half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz ogt half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_gt1:
@@ -838,8 +838,8 @@ define half @select_cc_gt1(ptr %a0) {
define half @select_cc_gt2(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz uge half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz uge half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_gt2:
@@ -861,8 +861,8 @@ define half @select_cc_gt2(ptr %a0) {
define half @select_cc_gt3(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz ule half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz ule half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_gt3:
@@ -884,8 +884,8 @@ define half @select_cc_gt3(ptr %a0) {
define half @select_cc_gt4(ptr %a0) {
%1 = load half, ptr %a0
- %2 = fcmp nsz olt half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz olt half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
ret half %3
; CHECK-LABEL: select_cc_gt4:
@@ -912,8 +912,8 @@ entry:
%tmp.0.extract.trunc = trunc i32 %0 to i16
%1 = bitcast i16 %tmp.0.extract.trunc to half
- %2 = fcmp nsz ueq half %1, 0xH0001
- %3 = select i1 %2, half 0xHC000, half 0xH0002
+ %2 = fcmp nsz ueq half %1, f0x0001
+ %3 = select i1 %2, half f0xC000, half f0x0002
%4 = bitcast half %3 to i16
%tmp4.0.insert.ext = zext i16 %4 to i32
@@ -1017,7 +1017,7 @@ entry:
%S = alloca half, align 2
%tmp.0.extract.trunc = trunc i32 %A.coerce to i16
%0 = bitcast i16 %tmp.0.extract.trunc to half
- store volatile half 0xH3C00, ptr %S, align 2
+ store volatile half f0x3C00, ptr %S, align 2
%S.0.S.0. = load volatile half, ptr %S, align 2
%add = fadd half %S.0.S.0., %0
%1 = bitcast half %add to i16
@@ -1038,10 +1038,10 @@ define i32 @fn1() {
entry:
%coerce = alloca half, align 2
%tmp2 = alloca i32, align 4
- store half 0xH7C00, ptr %coerce, align 2
+ store half f0x7C00, ptr %coerce, align 2
%0 = load i32, ptr %tmp2, align 4
%call = call i32 @fn2(i32 %0)
- store half 0xH7C00, ptr %coerce, align 2
+ store half f0x7C00, ptr %coerce, align 2
%1 = load i32, ptr %tmp2, align 4
%call3 = call i32 @fn3(i32 %1)
ret i32 %call3
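
Several of the half constants updated in this file are special values rather than ordinary numbers; a small decoding aid, with a hypothetical @inf_demo function that is not in the patch:

    define half @inf_demo() {
      ; In IEEE half, an all-ones exponent with a zero mantissa is infinity,
      ; so 0x7C00 is +inf (previously 0xH7C00) and 0xFC00 is -inf.
      ret half f0x7C00
    }
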
diff --git a/llvm/test/CodeGen/ARM/fp16-litpool-arm.mir b/llvm/test/CodeGen/ARM/fp16-litpool-arm.mir
index 8e671c903addad..2e45cef5dea568 100644
--- a/llvm/test/CodeGen/ARM/fp16-litpool-arm.mir
+++ b/llvm/test/CodeGen/ARM/fp16-litpool-arm.mir
@@ -15,11 +15,11 @@
%S = alloca half, align 2
%tmp.0.extract.trunc = trunc i32 %A.coerce to i16
%0 = bitcast i16 %tmp.0.extract.trunc to half
- store volatile half 0xH3C00, ptr %S, align 2
+ store volatile half f0x3C00, ptr %S, align 2
store volatile i64 4242424242424242, ptr %LL, align 8
%1 = call i32 @llvm.arm.space(i32 8920, i32 undef)
%S.0.S.0.570 = load volatile half, ptr %S, align 2
- %add298 = fadd half %S.0.S.0.570, 0xH2E66
+ %add298 = fadd half %S.0.S.0.570, f0x2E66
store volatile half %add298, ptr %S, align 2
%2 = call i32 @llvm.arm.space(i32 1350, i32 undef)
%3 = bitcast half %add298 to i16
@@ -51,7 +51,7 @@ constants:
value: i32 987766
alignment: 4
- id: 2
- value: half 0xH2E66
+ value: half f0x2E66
alignment: 2
#CHECK: B %[[BB4:bb.[0-9]]]
diff --git a/llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir b/llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir
index 03ddd80ed0ead3..11b50aeb127759 100644
--- a/llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir
+++ b/llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir
@@ -17,10 +17,10 @@
%tmp.0.extract.trunc = trunc i32 %A.coerce to i16
%0 = bitcast i16 %tmp.0.extract.trunc to half
store volatile float 4.200000e+01, ptr %F, align 4
- store volatile half 0xH3C00, ptr %S, align 2
+ store volatile half f0x3C00, ptr %S, align 2
%S.0.S.0.142 = load volatile half, ptr %S, align 2
%1 = call i32 @llvm.arm.space(i32 1230, i32 undef)
- %add42 = fadd half %S.0.S.0.142, 0xH2E66
+ %add42 = fadd half %S.0.S.0.142, f0x2E66
store volatile half %add42, ptr %S, align 2
%2 = call i32 @llvm.arm.space(i32 1330, i32 undef)
%S.0.S.0.119 = load volatile half, ptr %S, align 2
@@ -49,7 +49,7 @@ constants:
value: i32 1109917696
alignment: 4
- id: 1
- value: half 0xH2E66
+ value: half f0x2E66
alignment: 2
#CHECK: t2B %[[BB3:bb.[0-9]]]
diff --git a/llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir b/llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir
index bd343ebef26ad4..3bf233bff297d0 100644
--- a/llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir
+++ b/llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir
@@ -14,9 +14,9 @@
define dso_local i32 @CP() #1 {
entry:
%res = alloca half, align 2
- store half 0xH706B, ptr %res, align 2
+ store half f0x706B, ptr %res, align 2
%0 = load half, ptr %res, align 2
- %tobool = fcmp une half %0, 0xH0000
+ %tobool = fcmp une half %0, f0x0000
br i1 %tobool, label %LA, label %END
LA: ; preds = %entry
@@ -71,7 +71,7 @@ stack:
debug-info-location: '' }
constants:
- id: 0
- value: half 0xH706B
+ value: half f0x706B
alignment: 2
isTargetSpecific: false
diff --git a/llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir b/llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir
index 1f8e6b0ad42166..387dd282102342 100644
--- a/llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir
+++ b/llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir
@@ -15,9 +15,9 @@
define dso_local i32 @CP() #1 {
entry:
%res = alloca half, align 2
- store half 0xH706B, ptr %res, align 2
+ store half f0x706B, ptr %res, align 2
%0 = load half, ptr %res, align 2
- %tobool = fcmp une half %0, 0xH0000
+ %tobool = fcmp une half %0, f0x0000
br i1 %tobool, label %LA, label %END
LA: ; preds = %entry
@@ -72,7 +72,7 @@ stack:
debug-info-location: '' }
constants:
- id: 0
- value: half 0xH706B
+ value: half f0x706B
alignment: 2
isTargetSpecific: false
diff --git a/llvm/test/CodeGen/ARM/fp16-no-condition.ll b/llvm/test/CodeGen/ARM/fp16-no-condition.ll
index edfa61f773f9d3..b875ab225fdb7e 100644
--- a/llvm/test/CodeGen/ARM/fp16-no-condition.ll
+++ b/llvm/test/CodeGen/ARM/fp16-no-condition.ll
@@ -59,8 +59,8 @@ entry:
%a = load half, ptr %p, align 2
%b = load half, ptr %p1, align 2
- %aflag = fcmp oeq half %a, 0xH0000
- %bflag = fcmp oeq half %b, 0xH0000
+ %aflag = fcmp oeq half %a, f0x0000
+ %bflag = fcmp oeq half %b, f0x0000
%flag = or i1 %aflag, %bflag
br i1 %flag, label %call, label %out
diff --git a/llvm/test/CodeGen/ARM/fp16-v3.ll b/llvm/test/CodeGen/ARM/fp16-v3.ll
index 522cb129b5df19..377db1390fccc5 100644
--- a/llvm/test/CodeGen/ARM/fp16-v3.ll
+++ b/llvm/test/CodeGen/ARM/fp16-v3.ll
@@ -19,7 +19,7 @@ target triple = "armv7a--none-eabi"
; CHECK-NEXT: bx lr
define void @test_vec3(ptr %arr, i32 %i) #0 {
%H = sitofp i32 %i to half
- %S = fadd half %H, 0xH4A00
+ %S = fadd half %H, f0x4A00
%1 = insertelement <3 x half> undef, half %S, i32 0
%2 = insertelement <3 x half> %1, half %S, i32 1
%3 = insertelement <3 x half> %2, half %S, i32 2
diff --git a/llvm/test/CodeGen/ARM/pr47454.ll b/llvm/test/CodeGen/ARM/pr47454.ll
index 95f0ac75bd4d28..7447296870fa9b 100644
--- a/llvm/test/CodeGen/ARM/pr47454.ll
+++ b/llvm/test/CodeGen/ARM/pr47454.ll
@@ -25,7 +25,7 @@ define internal fastcc void @main() {
Entry:
; First arg directly from constant
%const = alloca half, align 2
- store half 0xH7C00, ptr %const, align 2
+ store half f0x7C00, ptr %const, align 2
%arg1 = load half, ptr %const, align 2
; Second arg from function return
%arg2 = call fastcc half @getConstant()
diff --git a/llvm/test/CodeGen/ARM/store_half.ll b/llvm/test/CodeGen/ARM/store_half.ll
index 70efbb5d7e060b..37add779415aae 100644
--- a/llvm/test/CodeGen/ARM/store_half.ll
+++ b/llvm/test/CodeGen/ARM/store_half.ll
@@ -4,6 +4,6 @@
; RUN: llc < %s -mtriple=armv8.2a-arm-none-eabi -mattr=+fullfp16 -filetype=obj -o /dev/null
define void @woah(ptr %waythere) {
- store half 0xHE110, ptr %waythere
+ store half f0xE110, ptr %waythere
ret void
}
diff --git a/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-soft-float.ll b/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-soft-float.ll
index 0415c327d099f6..b8988f312a612a 100644
--- a/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-soft-float.ll
+++ b/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-soft-float.ll
@@ -171,7 +171,7 @@ define fp128 @test_v2f128_reassoc(<2 x fp128> %a) nounwind {
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: pop {r11, lr}
; CHECK-NEXT: mov pc, lr
- %b = call reassoc fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 0xL00000000000000008000000000000000, <2 x fp128> %a)
+ %b = call reassoc fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 f0x80000000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
@@ -194,6 +194,6 @@ define fp128 @test_v2f128_seq(<2 x fp128> %a) nounwind {
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: pop {r11, lr}
; CHECK-NEXT: mov pc, lr
- %b = call fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 0xL00000000000000008000000000000000, <2 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 f0x80000000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
diff --git a/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-strict.ll b/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-strict.ll
index fe81324d6679bc..f73473782f4ff4 100644
--- a/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-strict.ll
+++ b/llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-strict.ll
@@ -117,7 +117,7 @@ define fp128 @test_v1f128_neutral(<1 x fp128> %a) nounwind {
; CHECK-LABEL: test_v1f128_neutral:
; CHECK: @ %bb.0:
; CHECK-NEXT: mov pc, lr
- %b = call fp128 @llvm.vector.reduce.fadd.f128.v1f128(fp128 0xL00000000000000008000000000000000, <1 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fadd.f128.v1f128(fp128 f0x80000000000000000000000000000000, <1 x fp128> %a)
ret fp128 %b
}
@@ -238,7 +238,7 @@ define fp128 @test_v2f128_neutral(<2 x fp128> %a) nounwind {
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: pop {r4, r5, r11, lr}
; CHECK-NEXT: mov pc, lr
- %b = call fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 0xL00000000000000008000000000000000, <2 x fp128> %a)
+ %b = call fp128 @llvm.vector.reduce.fadd.f128.v2f128(fp128 f0x80000000000000000000000000000000, <2 x fp128> %a)
ret fp128 %b
}
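
Worth calling out for reviewers: for fp128 this is not a pure prefix swap. As the hunks in these two files show, the legacy 0xL spelling listed the low 64 bits of the bit pattern before the high 64 bits, whereas f0x writes the full 128-bit pattern in natural order, most-significant digit (and sign bit) first. A sketch of the -0.0 constant used above, via a hypothetical @negzero function:

    define fp128 @negzero() {
      ; -0.0 as fp128: only the sign bit (bit 127) is set.
      ; Legacy 0xL spelling put the low 64 bits first:
      ;   0xL00000000000000008000000000000000
      ret fp128 f0x80000000000000000000000000000000
    }
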
diff --git a/llvm/test/CodeGen/DirectX/all.ll b/llvm/test/CodeGen/DirectX/all.ll
index 1c0b6486dc9358..57e10fb3469116 100644
--- a/llvm/test/CodeGen/DirectX/all.ll
+++ b/llvm/test/CodeGen/DirectX/all.ll
@@ -51,7 +51,7 @@ entry:
}
; CHECK-LABEL: all_half
-; CHECK: fcmp une half %{{.*}}, 0xH0000
+; CHECK: fcmp une half %{{.*}}, f0x0000
define noundef i1 @all_half(half noundef %p0) {
entry:
%dx.all = call i1 @llvm.dx.all.f16(half %p0)
diff --git a/llvm/test/CodeGen/DirectX/any.ll b/llvm/test/CodeGen/DirectX/any.ll
index e32aa389a81a55..fb6335b9715778 100644
--- a/llvm/test/CodeGen/DirectX/any.ll
+++ b/llvm/test/CodeGen/DirectX/any.ll
@@ -71,7 +71,7 @@ entry:
}
; CHECK-LABEL: any_half
-; CHECK: fcmp une half %{{.*}}, 0xH0000
+; CHECK: fcmp une half %{{.*}}, f0x0000
define noundef i1 @any_half(half noundef %p0) {
entry:
%p0.addr = alloca half, align 2
diff --git a/llvm/test/CodeGen/DirectX/atan2.ll b/llvm/test/CodeGen/DirectX/atan2.ll
index ee17d2ba777782..de5dd32cab3571 100644
--- a/llvm/test/CodeGen/DirectX/atan2.ll
+++ b/llvm/test/CodeGen/DirectX/atan2.ll
@@ -32,20 +32,20 @@ entry:
; CHECK: [[DIV:%.+]] = fdiv half %y, %x
; EXPCHECK: [[ATAN:%.+]] = call half @llvm.atan.f16(half [[DIV]])
; DOPCHECK: [[ATAN:%.+]] = call half @dx.op.unary.f16(i32 17, half [[DIV]])
-; CHECK-DAG: [[ADD_PI:%.+]] = fadd half [[ATAN]], 0xH4248
-; CHECK-DAG: [[SUB_PI:%.+]] = fsub half [[ATAN]], 0xH4248
-; CHECK-DAG: [[X_LT_0:%.+]] = fcmp olt half %x, 0xH0000
-; CHECK-DAG: [[X_EQ_0:%.+]] = fcmp oeq half %x, 0xH0000
-; CHECK-DAG: [[Y_GE_0:%.+]] = fcmp oge half %y, 0xH0000
-; CHECK-DAG: [[Y_LT_0:%.+]] = fcmp olt half %y, 0xH0000
+; CHECK-DAG: [[ADD_PI:%.+]] = fadd half [[ATAN]], f0x4248
+; CHECK-DAG: [[SUB_PI:%.+]] = fsub half [[ATAN]], f0x4248
+; CHECK-DAG: [[X_LT_0:%.+]] = fcmp olt half %x, f0x0000
+; CHECK-DAG: [[X_EQ_0:%.+]] = fcmp oeq half %x, f0x0000
+; CHECK-DAG: [[Y_GE_0:%.+]] = fcmp oge half %y, f0x0000
+; CHECK-DAG: [[Y_LT_0:%.+]] = fcmp olt half %y, f0x0000
; CHECK: [[XLT0_AND_YGE0:%.+]] = and i1 [[X_LT_0]], [[Y_GE_0]]
; CHECK: [[SELECT_ADD_PI:%.+]] = select i1 [[XLT0_AND_YGE0]], half [[ADD_PI]], half [[ATAN]]
; CHECK: [[XLT0_AND_YLT0:%.+]] = and i1 [[X_LT_0]], [[Y_LT_0]]
; CHECK: [[SELECT_SUB_PI:%.+]] = select i1 [[XLT0_AND_YLT0]], half [[SUB_PI]], half [[SELECT_ADD_PI]]
; CHECK: [[XEQ0_AND_YLT0:%.+]] = and i1 [[X_EQ_0]], [[Y_LT_0]]
-; CHECK: [[SELECT_NEGHPI:%.+]] = select i1 [[XEQ0_AND_YLT0]], half 0xHBE48, half [[SELECT_SUB_PI]]
+; CHECK: [[SELECT_NEGHPI:%.+]] = select i1 [[XEQ0_AND_YLT0]], half f0xBE48, half [[SELECT_SUB_PI]]
; CHECK: [[XEQ0_AND_YGE0:%.+]] = and i1 [[X_EQ_0]], [[Y_GE_0]]
-; CHECK: [[SELECT_HPI:%.+]] = select i1 [[XEQ0_AND_YGE0]], half 0xH3E48, half [[SELECT_NEGHPI]]
+; CHECK: [[SELECT_HPI:%.+]] = select i1 [[XEQ0_AND_YGE0]], half f0x3E48, half [[SELECT_NEGHPI]]
; CHECK: ret half [[SELECT_HPI]]
%elt.atan2 = call half @llvm.atan2.f16(half %y, half %x)
ret half %elt.atan2
diff --git a/llvm/test/CodeGen/DirectX/degrees.ll b/llvm/test/CodeGen/DirectX/degrees.ll
index b38ac13d5f24e2..417367cb75530b 100644
--- a/llvm/test/CodeGen/DirectX/degrees.ll
+++ b/llvm/test/CodeGen/DirectX/degrees.ll
@@ -6,7 +6,7 @@ define noundef half @degrees_half(half noundef %a) {
; CHECK-LABEL: define noundef half @degrees_half(
; CHECK-SAME: half noundef [[A:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[DX_DEGREES1:%.*]] = fmul half [[A]], 0xH5329
+; CHECK-NEXT: [[DX_DEGREES1:%.*]] = fmul half [[A]], f0x5329
; CHECK-NEXT: ret half [[DX_DEGREES1]]
;
entry:
diff --git a/llvm/test/CodeGen/DirectX/exp.ll b/llvm/test/CodeGen/DirectX/exp.ll
index c2d9938d27ecda..e240968089b68a 100644
--- a/llvm/test/CodeGen/DirectX/exp.ll
+++ b/llvm/test/CodeGen/DirectX/exp.ll
@@ -15,7 +15,7 @@ entry:
}
; CHECK-LABEL: exp_half
-; CHECK: fmul half 0xH3DC5, %{{.*}}
+; CHECK: fmul half f0x3DC5, %{{.*}}
; CHECK: call half @dx.op.unary.f16(i32 21, half %{{.*}})
; Function Attrs: noinline nounwind optnone
define noundef half @exp_half(half noundef %a) {
diff --git a/llvm/test/CodeGen/DirectX/log.ll b/llvm/test/CodeGen/DirectX/log.ll
index 195713309cd448..d49182acdcf576 100644
--- a/llvm/test/CodeGen/DirectX/log.ll
+++ b/llvm/test/CodeGen/DirectX/log.ll
@@ -16,7 +16,7 @@ define noundef half @log_half(half noundef %a) #0 {
entry:
; DOPCHECK: call half @dx.op.unary.f16(i32 23, half %{{.*}})
; EXPCHECK: call half @llvm.log2.f16(half %a)
-; CHECK: fmul half 0xH398C, %{{.*}}
+; CHECK: fmul half f0x398C, %{{.*}}
%elt.log = call half @llvm.log.f16(half %a)
ret half %elt.log
}
diff --git a/llvm/test/CodeGen/DirectX/log10.ll b/llvm/test/CodeGen/DirectX/log10.ll
index f3acccce7e451a..06fc0cba57e0c7 100644
--- a/llvm/test/CodeGen/DirectX/log10.ll
+++ b/llvm/test/CodeGen/DirectX/log10.ll
@@ -16,7 +16,7 @@ define noundef half @log10_half(half noundef %a) #0 {
entry:
; DOPCHECK: call half @dx.op.unary.f16(i32 23, half %{{.*}})
; EXPCHECK: call half @llvm.log2.f16(half %a)
-; CHECK: fmul half 0xH34D1, %{{.*}}
+; CHECK: fmul half f0x34D1, %{{.*}}
%elt.log10 = call half @llvm.log10.f16(half %a)
ret half %elt.log10
}
diff --git a/llvm/test/CodeGen/DirectX/radians.ll b/llvm/test/CodeGen/DirectX/radians.ll
index f31585cead3766..b04d4fcbcdbcb7 100644
--- a/llvm/test/CodeGen/DirectX/radians.ll
+++ b/llvm/test/CodeGen/DirectX/radians.ll
@@ -11,7 +11,7 @@ define noundef half @radians_half(half noundef %a) {
; CHECK-LABEL: define noundef half @radians_half(
; CHECK-SAME: half noundef [[A:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = fmul half [[A]], 0xH2478
+; CHECK-NEXT: [[TMP0:%.*]] = fmul half [[A]], f0x2478
; CHECK-NEXT: ret half [[TMP0]]
;
entry:
@@ -36,13 +36,13 @@ define noundef <4 x half> @radians_half_vector(<4 x half> noundef %a) {
; CHECK-SAME: <4 x half> noundef [[A:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK: [[ee0:%.*]] = extractelement <4 x half> [[A]], i64 0
-; CHECK: [[ie0:%.*]] = fmul half [[ee0]], 0xH2478
+; CHECK: [[ie0:%.*]] = fmul half [[ee0]], f0x2478
; CHECK: [[ee1:%.*]] = extractelement <4 x half> [[A]], i64 1
-; CHECK: [[ie1:%.*]] = fmul half [[ee1]], 0xH2478
+; CHECK: [[ie1:%.*]] = fmul half [[ee1]], f0x2478
; CHECK: [[ee2:%.*]] = extractelement <4 x half> [[A]], i64 2
-; CHECK: [[ie2:%.*]] = fmul half [[ee2]], 0xH2478
+; CHECK: [[ie2:%.*]] = fmul half [[ee2]], f0x2478
; CHECK: [[ee3:%.*]] = extractelement <4 x half> [[A]], i64 3
-; CHECK: [[ie3:%.*]] = fmul half [[ee3]], 0xH2478
+; CHECK: [[ie3:%.*]] = fmul half [[ee3]], f0x2478
; CHECK: [[TMP0:%.*]] = insertelement <4 x half> poison, half [[ie0]], i64 0
; CHECK: [[TMP1:%.*]] = insertelement <4 x half> [[TMP0]], half [[ie1]], i64 1
; CHECK: [[TMP2:%.*]] = insertelement <4 x half> [[TMP1]], half [[ie2]], i64 2
diff --git a/llvm/test/CodeGen/DirectX/sign.ll b/llvm/test/CodeGen/DirectX/sign.ll
index 47e51b28d20844..8af69279bebfce 100644
--- a/llvm/test/CodeGen/DirectX/sign.ll
+++ b/llvm/test/CodeGen/DirectX/sign.ll
@@ -6,8 +6,8 @@ define noundef i32 @sign_half(half noundef %a) {
; CHECK-LABEL: define noundef i32 @sign_half(
; CHECK-SAME: half noundef [[A:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = fcmp olt half 0xH0000, [[A]]
-; CHECK-NEXT: [[TMP1:%.*]] = fcmp olt half [[A]], 0xH0000
+; CHECK-NEXT: [[TMP0:%.*]] = fcmp olt half f0x0000, [[A]]
+; CHECK-NEXT: [[TMP1:%.*]] = fcmp olt half [[A]], f0x0000
; CHECK-NEXT: [[TMP2:%.*]] = zext i1 [[TMP0]] to i32
; CHECK-NEXT: [[TMP3:%.*]] = zext i1 [[TMP1]] to i32
; CHECK-NEXT: [[TMP4:%.*]] = sub i32 [[TMP2]], [[TMP3]]
diff --git a/llvm/test/CodeGen/DirectX/step.ll b/llvm/test/CodeGen/DirectX/step.ll
index e89c4e375b2d05..fa12fe6700a152 100644
--- a/llvm/test/CodeGen/DirectX/step.ll
+++ b/llvm/test/CodeGen/DirectX/step.ll
@@ -16,7 +16,7 @@ declare <4 x float> @llvm.dx.step.v4f32(<4 x float>, <4 x float>)
define noundef half @test_step_half(half noundef %p0, half noundef %p1) {
entry:
; CHECK: %0 = fcmp olt half %p1, %p0
- ; CHECK: %1 = select i1 %0, half 0xH0000, half 0xH3C00
+ ; CHECK: %1 = select i1 %0, half f0x0000, half f0x3C00
%hlsl.step = call half @llvm.dx.step.f16(half %p0, half %p1)
ret half %hlsl.step
}
@@ -24,7 +24,7 @@ entry:
define noundef <2 x half> @test_step_half2(<2 x half> noundef %p0, <2 x half> noundef %p1) {
entry:
; CHECK: %0 = fcmp olt <2 x half> %p1, %p0
- ; CHECK: %1 = select <2 x i1> %0, <2 x half> zeroinitializer, <2 x half> splat (half 0xH3C00)
+ ; CHECK: %1 = select <2 x i1> %0, <2 x half> zeroinitializer, <2 x half> splat (half f0x3C00)
%hlsl.step = call <2 x half> @llvm.dx.step.v2f16(<2 x half> %p0, <2 x half> %p1)
ret <2 x half> %hlsl.step
}
@@ -32,7 +32,7 @@ entry:
define noundef <3 x half> @test_step_half3(<3 x half> noundef %p0, <3 x half> noundef %p1) {
entry:
; CHECK: %0 = fcmp olt <3 x half> %p1, %p0
- ; CHECK: %1 = select <3 x i1> %0, <3 x half> zeroinitializer, <3 x half> splat (half 0xH3C00)
+ ; CHECK: %1 = select <3 x i1> %0, <3 x half> zeroinitializer, <3 x half> splat (half f0x3C00)
%hlsl.step = call <3 x half> @llvm.dx.step.v3f16(<3 x half> %p0, <3 x half> %p1)
ret <3 x half> %hlsl.step
}
@@ -40,7 +40,7 @@ entry:
define noundef <4 x half> @test_step_half4(<4 x half> noundef %p0, <4 x half> noundef %p1) {
entry:
; CHECK: %0 = fcmp olt <4 x half> %p1, %p0
- ; CHECK: %1 = select <4 x i1> %0, <4 x half> zeroinitializer, <4 x half> splat (half 0xH3C00)
+ ; CHECK: %1 = select <4 x i1> %0, <4 x half> zeroinitializer, <4 x half> splat (half f0x3C00)
%hlsl.step = call <4 x half> @llvm.dx.step.v4f16(<4 x half> %p0, <4 x half> %p1)
ret <4 x half> %hlsl.step
}
diff --git a/llvm/test/CodeGen/DirectX/vector_reduce_add.ll b/llvm/test/CodeGen/DirectX/vector_reduce_add.ll
index d4ee16a24cb45f..bd3f121cb99a94 100644
--- a/llvm/test/CodeGen/DirectX/vector_reduce_add.ll
+++ b/llvm/test/CodeGen/DirectX/vector_reduce_add.ll
@@ -13,7 +13,7 @@ define noundef half @test_length_half2(<2 x half> noundef %p0) {
; CHECK-NEXT: ret half [[TMP2]]
;
entry:
- %rdx.fadd = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> %p0)
+ %rdx.fadd = call half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> %p0)
ret half %rdx.fadd
}
@@ -22,13 +22,13 @@ define noundef half @test_length_half2_start1(<2 x half> noundef %p0) {
; CHECK-SAME: <2 x half> noundef [[P0:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK-NEXT: [[TMP0:%.*]] = extractelement <2 x half> [[P0]], i64 0
-; CHECK-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], 0xH0001
+; CHECK-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], f0x0001
; CHECK-NEXT: [[TMP2:%.*]] = extractelement <2 x half> [[P0]], i64 1
; CHECK-NEXT: [[TMP3:%.*]] = fadd half [[TMP1]], [[TMP2]]
; CHECK-NEXT: ret half [[TMP3]]
;
entry:
- %rdx.fadd = call half @llvm.vector.reduce.fadd.v2f16(half 0xH0001, <2 x half> %p0)
+ %rdx.fadd = call half @llvm.vector.reduce.fadd.v2f16(half f0x0001, <2 x half> %p0)
ret half %rdx.fadd
}
@@ -44,7 +44,7 @@ define noundef half @test_length_half3(<3 x half> noundef %p0) {
; CHECK-NEXT: ret half [[TMP4]]
;
entry:
- %rdx.fadd = call half @llvm.vector.reduce.fadd.v3f16(half 0xH0000, <3 x half> %p0)
+ %rdx.fadd = call half @llvm.vector.reduce.fadd.v3f16(half f0x0000, <3 x half> %p0)
ret half %rdx.fadd
}
@@ -62,7 +62,7 @@ define noundef half @test_length_half4(<4 x half> noundef %p0) {
; CHECK-NEXT: ret half [[TMP6]]
;
entry:
- %rdx.fadd = call half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> %p0)
+ %rdx.fadd = call half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> %p0)
ret half %rdx.fadd
}
diff --git a/llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll b/llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll
index 7d6705843d01bc..2634f5cad79c23 100644
--- a/llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll
+++ b/llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll
@@ -8,7 +8,7 @@ target triple = "hexagon"
define ptr @fred(ptr %v0) local_unnamed_addr #0 {
b0:
%v1 = load <64 x half>, ptr %v0, align 2
- %v2 = insertelement <64 x half> %v1, half 0xH4170, i32 17
+ %v2 = insertelement <64 x half> %v1, half f0x4170, i32 17
store volatile <64 x half> %v2, ptr %v0, align 2
ret ptr %v0
}
diff --git a/llvm/test/CodeGen/Hexagon/autohvx/hfnosplat_cp.ll b/llvm/test/CodeGen/Hexagon/autohvx/hfnosplat_cp.ll
index 4c5c96e61b78c9..96abf27118a886 100644
--- a/llvm/test/CodeGen/Hexagon/autohvx/hfnosplat_cp.ll
+++ b/llvm/test/CodeGen/Hexagon/autohvx/hfnosplat_cp.ll
@@ -10,7 +10,7 @@ target triple = "hexagon"
; Function Attrs: nofree norecurse nounwind writeonly
define dso_local i32 @foo(ptr nocapture %a) local_unnamed_addr #0 {
vector.body:
- store <40 x half> <half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH3E79, half 0xH3E79, half 0xH3E79, half 0xH3E79, half 0xH3E79, half 0xH3E79, half 0xH3E79, half 0xH3E79, half 0xH3E79>, ptr %a, align 2
+ store <40 x half> <half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x3E79, half f0x3E79, half f0x3E79, half f0x3E79, half f0x3E79, half f0x3E79, half f0x3E79, half f0x3E79, half f0x3E79>, ptr %a, align 2
ret i32 0
}
diff --git a/llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll b/llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll
index 78ea32cc104031..9ddfe06e1f9fe2 100644
--- a/llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll
+++ b/llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll
@@ -29,10 +29,10 @@ define dso_local i32 @foo(ptr nocapture %0, i32 %1) local_unnamed_addr #0 {
%13 = phi i32 [ 0, %9 ], [ %18, %12 ]
%14 = getelementptr half, ptr %0, i32 %13
%15 = bitcast ptr %14 to ptr
- store <64 x half> <half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170>, ptr %15, align 2
+ store <64 x half> <half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170>, ptr %15, align 2
%16 = getelementptr half, ptr %14, i32 64
%17 = bitcast ptr %16 to ptr
- store <64 x half> <half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170, half 0xH4170>, ptr %17, align 2
+ store <64 x half> <half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170, half f0x4170>, ptr %17, align 2
%18 = add i32 %13, 128
%19 = icmp eq i32 %18, %10
br i1 %19, label %20, label %12
@@ -47,7 +47,7 @@ define dso_local i32 @foo(ptr nocapture %0, i32 %1) local_unnamed_addr #0 {
23: ; preds = %23, %6
%24 = phi ptr [ %28, %23 ], [ %7, %6 ]
%25 = phi i32 [ %26, %23 ], [ %8, %6 ]
- store half 0xH4170, ptr %24, align 2
+ store half f0x4170, ptr %24, align 2
%26 = add nuw nsw i32 %25, 1
%27 = icmp eq i32 %26, %1
%28 = getelementptr half, ptr %24, i32 1
diff --git a/llvm/test/CodeGen/Hexagon/autohvx/isel-mstore-fp16.ll b/llvm/test/CodeGen/Hexagon/autohvx/isel-mstore-fp16.ll
index 18481bdcd12fe3..b6debe9b0205eb 100644
--- a/llvm/test/CodeGen/Hexagon/autohvx/isel-mstore-fp16.ll
+++ b/llvm/test/CodeGen/Hexagon/autohvx/isel-mstore-fp16.ll
@@ -7,7 +7,7 @@ target datalayout = "e-m:e-p:32:32:32-a:0-n16:32-i64:64:64-i32:32:32-i16:16:16-i
target triple = "hexagon"
define dllexport void @fred() #0 {
- tail call void @llvm.masked.store.v64f16.p0(<64 x half> <half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef, half 0xHFBFF, half undef>, ptr undef, i32 64, <64 x i1> <i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false>)
+ tail call void @llvm.masked.store.v64f16.p0(<64 x half> <half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef, half f0xFBFF, half undef>, ptr undef, i32 64, <64 x i1> <i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false>)
ret void
}
diff --git a/llvm/test/CodeGen/LoongArch/vararg.ll b/llvm/test/CodeGen/LoongArch/vararg.ll
index f488610868eb3d..7080b05aa59251 100644
--- a/llvm/test/CodeGen/LoongArch/vararg.ll
+++ b/llvm/test/CodeGen/LoongArch/vararg.ll
@@ -256,7 +256,7 @@ define void @va_aligned_register_caller() nounwind {
; LA64-WITHFP-NEXT: addi.d $sp, $sp, 16
; LA64-WITHFP-NEXT: ret
%1 = call i64 (i64, i128, ...) @va_aligned_register(i64 2, i128 1111,
- fp128 0xLEB851EB851EB851F400091EB851EB851)
+ fp128 f0x400091EB851EB851EB851EB851EB851F)
ret void
}
@@ -348,7 +348,7 @@ define void @va_aligned_stack_caller() nounwind {
; LA64-WITHFP-NEXT: ret
%1 = call i32 (i32, ...) @va_aligned_stack_callee(i32 1, i32 11,
i256 1000, i32 12, i32 13, i128 18446744073709551616, i32 14,
- fp128 0xLEB851EB851EB851F400091EB851EB851, i64 15,
+ fp128 f0x400091EB851EB851EB851EB851EB851F, i64 15,
[2 x i64] [i64 16, i64 17])
ret void
}
diff --git a/llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir b/llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir
index c4e035a4f095cd..7dfdb682c0e9e5 100644
--- a/llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir
+++ b/llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir
@@ -5,8 +5,8 @@
name: bfloat_immediates
body: |
bb.0:
- ; CHECK: %0:_(s16) = G_FCONSTANT bfloat 0xR3E80
- ; CHECK: %1:_(s16) = G_FCONSTANT bfloat 0xR3E80
- %0:_(s16) = G_FCONSTANT bfloat 0xR3E80
+ ; CHECK: %0:_(s16) = G_FCONSTANT bfloat f0x3E80
+ ; CHECK: %1:_(s16) = G_FCONSTANT bfloat f0x3E80
+ %0:_(s16) = G_FCONSTANT bfloat f0x3E80
%1:_(s16) = G_FCONSTANT bfloat 0.25
...
diff --git a/llvm/test/CodeGen/Mips/msa/fexuprl.ll b/llvm/test/CodeGen/Mips/msa/fexuprl.ll
index e25ae39e7e4eb4..f42a74ff9174d1 100644
--- a/llvm/test/CodeGen/Mips/msa/fexuprl.ll
+++ b/llvm/test/CodeGen/Mips/msa/fexuprl.ll
@@ -2,7 +2,7 @@
; Test that fexup[rl].w don't crash LLVM during type legalization.
-@g = local_unnamed_addr global <8 x half> <half 0xH5BF8, half 0xH5BF8, half 0xH5BF8, half 0xH5BF8, half 0xH73C0, half 0xH73C0, half 0xH73C0, half 0xH73C0>, align 16
+@g = local_unnamed_addr global <8 x half> <half f0x5BF8, half f0x5BF8, half f0x5BF8, half f0x5BF8, half f0x73C0, half f0x73C0, half f0x73C0, half f0x73C0>, align 16
@i = local_unnamed_addr global <4 x float> zeroinitializer, align 16
@j = local_unnamed_addr global <4 x float> zeroinitializer, align 16
diff --git a/llvm/test/CodeGen/NVPTX/bf16-instructions.ll b/llvm/test/CodeGen/NVPTX/bf16-instructions.ll
index 6828bac18cad7f..6c2924f571d99d 100644
--- a/llvm/test/CodeGen/NVPTX/bf16-instructions.ll
+++ b/llvm/test/CodeGen/NVPTX/bf16-instructions.ll
@@ -11,7 +11,7 @@ target triple = "nvptx64-nvidia-cuda"
; LDST: .b8 bfloat_array[8] = {1, 2, 3, 4, 5, 6, 7, 8};
@"bfloat_array" = addrspace(1) constant [4 x bfloat]
- [bfloat 0xR0201, bfloat 0xR0403, bfloat 0xR0605, bfloat 0xR0807]
+ [bfloat f0x0201, bfloat f0x0403, bfloat f0x0605, bfloat f0x0807]
define bfloat @test_fadd(bfloat %0, bfloat %1) {
; SM70-LABEL: test_fadd(
diff --git a/llvm/test/CodeGen/NVPTX/bf16.ll b/llvm/test/CodeGen/NVPTX/bf16.ll
index 98fdbbbdd9c75a..412895bb598d0d 100644
--- a/llvm/test/CodeGen/NVPTX/bf16.ll
+++ b/llvm/test/CodeGen/NVPTX/bf16.ll
@@ -3,7 +3,7 @@
; LDST: .b8 bfloat_array[8] = {1, 2, 3, 4, 5, 6, 7, 8};
@"bfloat_array" = addrspace(1) constant [4 x bfloat]
- [bfloat 0xR0201, bfloat 0xR0403, bfloat 0xR0605, bfloat 0xR0807]
+ [bfloat f0x0201, bfloat f0x0403, bfloat f0x0605, bfloat f0x0807]
define void @test_load_store(ptr addrspace(1) %in, ptr addrspace(1) %out) {
; CHECK-LABEL: @test_load_store
diff --git a/llvm/test/CodeGen/NVPTX/half.ll b/llvm/test/CodeGen/NVPTX/half.ll
index 1b53e246ecd17c..42e65f2dbc9dec 100644
--- a/llvm/test/CodeGen/NVPTX/half.ll
+++ b/llvm/test/CodeGen/NVPTX/half.ll
@@ -3,7 +3,7 @@
; CHECK: .b8 half_array[8] = {1, 2, 3, 4, 5, 6, 7, 8};
@"half_array" = addrspace(1) constant [4 x half]
- [half 0xH0201, half 0xH0403, half 0xH0605, half 0xH0807]
+ [half f0x0201, half f0x0403, half f0x0605, half f0x0807]
define void @test_load_store(ptr addrspace(1) %in, ptr addrspace(1) %out) {
; CHECK-LABEL: @test_load_store
diff --git a/llvm/test/CodeGen/PowerPC/2008-05-01-ppc_fp128.ll b/llvm/test/CodeGen/PowerPC/2008-05-01-ppc_fp128.ll
index c9eb6521c11ba0..17113a316d2e2e 100644
--- a/llvm/test/CodeGen/PowerPC/2008-05-01-ppc_fp128.ll
+++ b/llvm/test/CodeGen/PowerPC/2008-05-01-ppc_fp128.ll
@@ -16,8 +16,8 @@ entry:
br i1 false, label %bb36, label %bb484
bb36: ; preds = %entry
- %tmp124 = fcmp ord ppc_fp128 %b, 0xM00000000000000000000000000000000 ; <i1> [#uses=1]
- %cmp = fcmp une ppc_fp128 0xM00000000000000000000000000000000, 0xM00000000000000000000000000000000
+ %tmp124 = fcmp ord ppc_fp128 %b, f0x00000000000000000000000000000000 ; <i1> [#uses=1]
+ %cmp = fcmp une ppc_fp128 f0x00000000000000000000000000000000, f0x00000000000000000000000000000000
%tmp140 = and i1 %tmp124, %cmp
unreachable
diff --git a/llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll b/llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll
index 0a0f14d43e3611..210e49c2fee620 100644
--- a/llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll
+++ b/llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll
@@ -8,14 +8,14 @@ target triple = "powerpc64-unknown-linux-gnu"
define hidden i256 @__divtc3(ppc_fp128 %a, ppc_fp128 %b, ppc_fp128 %c, ppc_fp128 %d) nounwind readnone {
entry:
call ppc_fp128 @fabsl( ppc_fp128 %d ) nounwind readnone ; <ppc_fp128>:0 [#uses=1]
- %1 = fcmp olt ppc_fp128 0xM00000000000000000000000000000000, %0 ; <i1>:1 [#uses=1]
- %.pn106 = select i1 %1, ppc_fp128 %a, ppc_fp128 0xM00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
- %.pn = fsub ppc_fp128 0xM00000000000000000000000000000000, %.pn106 ; <ppc_fp128> [#uses=1]
- %y.0 = fdiv ppc_fp128 %.pn, 0xM00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
- %2 = fmul ppc_fp128 %y.0, 0xM3FF00000000000000000000000000000 ; <ppc_fp128>:2 [#uses=1]
- %fmul = fmul ppc_fp128 0xM00000000000000000000000000000000, 0xM00000000000000000000000000000000
+ %1 = fcmp olt ppc_fp128 f0x00000000000000000000000000000000, %0 ; <i1>:1 [#uses=1]
+ %.pn106 = select i1 %1, ppc_fp128 %a, ppc_fp128 f0x00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
+ %.pn = fsub ppc_fp128 f0x00000000000000000000000000000000, %.pn106 ; <ppc_fp128> [#uses=1]
+ %y.0 = fdiv ppc_fp128 %.pn, f0x00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
+ %2 = fmul ppc_fp128 %y.0, f0x00000000000000003FF0000000000000 ; <ppc_fp128>:2 [#uses=1]
+ %fmul = fmul ppc_fp128 f0x00000000000000000000000000000000, f0x00000000000000000000000000000000
%fadd = fadd ppc_fp128 %2, %fmul
- %tmpi = fadd ppc_fp128 %fadd, 0xM00000000000000000000000000000000
+ %tmpi = fadd ppc_fp128 %fadd, f0x00000000000000000000000000000000
store ppc_fp128 %tmpi, ptr null, align 16
ret i256 0
}
diff --git a/llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll b/llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll
index 966fb52089716b..7bc4cf0b440ce2 100644
--- a/llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll
+++ b/llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll
@@ -11,7 +11,7 @@ entry:
br i1 false, label %bb3, label %bb4
bb3: ; preds = %entry
- fsub ppc_fp128 0xM80000000000000000000000000000000, 0xM00000000000000000000000000000000 ; <ppc_fp128>:0 [#uses=1]
+ fsub ppc_fp128 f0x00000000000000008000000000000000, f0x00000000000000000000000000000000 ; <ppc_fp128>:0 [#uses=1]
fptoui ppc_fp128 %0 to i32 ; <i32>:1 [#uses=1]
zext i32 %1 to i64 ; <i64>:2 [#uses=1]
sub i64 0, %2 ; <i64>:3 [#uses=1]
diff --git a/llvm/test/CodeGen/PowerPC/2008-10-28-UnprocessedNode.ll b/llvm/test/CodeGen/PowerPC/2008-10-28-UnprocessedNode.ll
index 0aa05f6c9b395e..e5a79ce7df4876 100644
--- a/llvm/test/CodeGen/PowerPC/2008-10-28-UnprocessedNode.ll
+++ b/llvm/test/CodeGen/PowerPC/2008-10-28-UnprocessedNode.ll
@@ -3,8 +3,8 @@
define void @__divtc3(ptr noalias sret({ ppc_fp128, ppc_fp128 }) %agg.result, ppc_fp128 %a, ppc_fp128 %b, ppc_fp128 %c, ppc_fp128 %d) nounwind {
entry:
%imag59 = load ppc_fp128, ptr null, align 8 ; <ppc_fp128> [#uses=1]
- %0 = fmul ppc_fp128 0xM00000000000000000000000000000000, %imag59 ; <ppc_fp128> [#uses=1]
- %1 = fmul ppc_fp128 0xM00000000000000000000000000000000, 0xM00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
+ %0 = fmul ppc_fp128 f0x00000000000000000000000000000000, %imag59 ; <ppc_fp128> [#uses=1]
+ %1 = fmul ppc_fp128 f0x00000000000000000000000000000000, f0x00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
%2 = fadd ppc_fp128 %0, %1 ; <ppc_fp128> [#uses=1]
store ppc_fp128 %2, ptr null, align 16
unreachable
diff --git a/llvm/test/CodeGen/PowerPC/2008-10-28-f128-i32.ll b/llvm/test/CodeGen/PowerPC/2008-10-28-f128-i32.ll
index e20fc400f80f8f..0f8f2e7330bfe2 100644
--- a/llvm/test/CodeGen/PowerPC/2008-10-28-f128-i32.ll
+++ b/llvm/test/CodeGen/PowerPC/2008-10-28-f128-i32.ll
@@ -306,21 +306,21 @@ define i64 @__fixunstfdi(ppc_fp128 %a) nounwind readnone {
; CHECK-NEXT: mtlr 0
; CHECK-NEXT: blr
entry:
- %0 = fcmp olt ppc_fp128 %a, 0xM00000000000000000000000000000000 ; <i1> [#uses=1]
+ %0 = fcmp olt ppc_fp128 %a, f0x00000000000000000000000000000000 ; <i1> [#uses=1]
br i1 %0, label %bb5, label %bb1
bb1: ; preds = %entry
- %1 = fmul ppc_fp128 %a, 0xM3DF00000000000000000000000000000 ; <ppc_fp128> [#uses=1]
+ %1 = fmul ppc_fp128 %a, f0x00000000000000003DF0000000000000 ; <ppc_fp128> [#uses=1]
%2 = fptoui ppc_fp128 %1 to i32 ; <i32> [#uses=1]
%3 = zext i32 %2 to i64 ; <i64> [#uses=1]
%4 = shl i64 %3, 32 ; <i64> [#uses=3]
%5 = uitofp i64 %4 to ppc_fp128 ; <ppc_fp128> [#uses=1]
%6 = fsub ppc_fp128 %a, %5 ; <ppc_fp128> [#uses=3]
- %7 = fcmp olt ppc_fp128 %6, 0xM00000000000000000000000000000000 ; <i1> [#uses=1]
+ %7 = fcmp olt ppc_fp128 %6, f0x00000000000000000000000000000000 ; <i1> [#uses=1]
br i1 %7, label %bb2, label %bb3
bb2: ; preds = %bb1
- %8 = fsub ppc_fp128 0xM80000000000000000000000000000000, %6 ; <ppc_fp128> [#uses=1]
+ %8 = fsub ppc_fp128 f0x00000000000000008000000000000000, %6 ; <ppc_fp128> [#uses=1]
%9 = fptoui ppc_fp128 %8 to i32 ; <i32> [#uses=1]
%10 = zext i32 %9 to i64 ; <i64> [#uses=1]
%11 = sub i64 %4, %10 ; <i64> [#uses=1]
diff --git a/llvm/test/CodeGen/PowerPC/2008-12-02-LegalizeTypeAssert.ll b/llvm/test/CodeGen/PowerPC/2008-12-02-LegalizeTypeAssert.ll
index a7eb5de4591877..8af99a31ff5fb8 100644
--- a/llvm/test/CodeGen/PowerPC/2008-12-02-LegalizeTypeAssert.ll
+++ b/llvm/test/CodeGen/PowerPC/2008-12-02-LegalizeTypeAssert.ll
@@ -6,8 +6,8 @@ entry:
br i1 false, label %bb6, label %bb21
bb6: ; preds = %entry
- %0 = tail call ppc_fp128 @copysignl(ppc_fp128 0xM00000000000000000000000000000000, ppc_fp128 %a) nounwind readnone ; <ppc_fp128> [#uses=0]
- %iftmp.1.0 = select i1 %.pre139, ppc_fp128 0xM3FF00000000000000000000000000000, ppc_fp128 0xM00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
+ %0 = tail call ppc_fp128 @copysignl(ppc_fp128 f0x00000000000000000000000000000000, ppc_fp128 %a) nounwind readnone ; <ppc_fp128> [#uses=0]
+ %iftmp.1.0 = select i1 %.pre139, ppc_fp128 f0x00000000000000003FF0000000000000, ppc_fp128 f0x00000000000000000000000000000000 ; <ppc_fp128> [#uses=1]
%1 = tail call ppc_fp128 @copysignl(ppc_fp128 %iftmp.1.0, ppc_fp128 %b) nounwind readnone ; <ppc_fp128> [#uses=0]
unreachable
diff --git a/llvm/test/CodeGen/PowerPC/aix-complex.ll b/llvm/test/CodeGen/PowerPC/aix-complex.ll
index f2114540a57e4c..2376e88861eae8 100644
--- a/llvm/test/CodeGen/PowerPC/aix-complex.ll
+++ b/llvm/test/CodeGen/PowerPC/aix-complex.ll
@@ -88,8 +88,8 @@ entry:
%retval = alloca { ppc_fp128, ppc_fp128 }, align 16
%retval.realp = getelementptr inbounds { ppc_fp128, ppc_fp128 }, ptr %retval, i32 0, i32 0
%retval.imagp = getelementptr inbounds { ppc_fp128, ppc_fp128 }, ptr %retval, i32 0, i32 1
- store ppc_fp128 0xM7ffeffffffffffffffffffffffffffff, ptr %retval.realp, align 16
- store ppc_fp128 0xM3ffefffffffffffffffffffffffffffe, ptr %retval.imagp, align 16
+ store ppc_fp128 f0xffffffffffffffff7ffeffffffffffff, ptr %retval.realp, align 16
+ store ppc_fp128 f0xfffffffffffffffe3ffeffffffffffff, ptr %retval.imagp, align 16
%0 = load { ppc_fp128, ppc_fp128 }, ptr %retval, align 16
ret { ppc_fp128, ppc_fp128 } %0
}
diff --git a/llvm/test/CodeGen/PowerPC/builtins-ppc-p9-f128.ll b/llvm/test/CodeGen/PowerPC/builtins-ppc-p9-f128.ll
index 18ac03445df45d..95000cc57acabe 100644
--- a/llvm/test/CodeGen/PowerPC/builtins-ppc-p9-f128.ll
+++ b/llvm/test/CodeGen/PowerPC/builtins-ppc-p9-f128.ll
@@ -1,10 +1,10 @@
; RUN: llc -verify-machineinstrs -mcpu=pwr9 -mtriple=powerpc64le-unknown-unknown \
; RUN: -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s | FileCheck %s
-@A = common global fp128 0xL00000000000000000000000000000000, align 16
-@B = common global fp128 0xL00000000000000000000000000000000, align 16
-@C = common global fp128 0xL00000000000000000000000000000000, align 16
-@D = common global fp128 0xL00000000000000000000000000000000, align 16
+@A = common global fp128 f0x00000000000000000000000000000000, align 16
+@B = common global fp128 f0x00000000000000000000000000000000, align 16
+@C = common global fp128 f0x00000000000000000000000000000000, align 16
+@D = common global fp128 f0x00000000000000000000000000000000, align 16
define fp128 @testSqrtOdd(fp128 %a) {
entry:
@@ -21,15 +21,15 @@ define void @testFMAOdd(fp128 %a, fp128 %b, fp128 %c) {
entry:
%0 = call fp128 @llvm.ppc.fmaf128.round.to.odd(fp128 %a, fp128 %b, fp128 %c)
store fp128 %0, ptr @A, align 16
- %sub = fsub fp128 0xL00000000000000008000000000000000, %c
+ %sub = fsub fp128 f0x80000000000000000000000000000000, %c
%1 = call fp128 @llvm.ppc.fmaf128.round.to.odd(fp128 %a, fp128 %b, fp128 %sub)
store fp128 %1, ptr @B, align 16
%2 = call fp128 @llvm.ppc.fmaf128.round.to.odd(fp128 %a, fp128 %b, fp128 %c)
- %sub1 = fsub fp128 0xL00000000000000008000000000000000, %2
+ %sub1 = fsub fp128 f0x80000000000000000000000000000000, %2
store fp128 %sub1, ptr @C, align 16
- %sub2 = fsub fp128 0xL00000000000000008000000000000000, %c
+ %sub2 = fsub fp128 f0x80000000000000000000000000000000, %c
%3 = call fp128 @llvm.ppc.fmaf128.round.to.odd(fp128 %a, fp128 %b, fp128 %sub2)
- %sub3 = fsub fp128 0xL00000000000000008000000000000000, %3
+ %sub3 = fsub fp128 f0x80000000000000000000000000000000, %3
store fp128 %sub3, ptr @D, align 16
ret void
; CHECK-LABEL: testFMAOdd
diff --git a/llvm/test/CodeGen/PowerPC/bv-widen-undef.ll b/llvm/test/CodeGen/PowerPC/bv-widen-undef.ll
index 76678f4cf611eb..6b66a6987465fe 100644
--- a/llvm/test/CodeGen/PowerPC/bv-widen-undef.ll
+++ b/llvm/test/CodeGen/PowerPC/bv-widen-undef.ll
@@ -14,7 +14,7 @@ CF77: ; preds = %CF81, %CF77, %CF
CF80: ; preds = %CF80, %CF77
%B21 = mul <2 x i8> %Shuff12, <i8 -1, i8 -1>
- %Cmp24 = fcmp une ppc_fp128 0xM00000000000000000000000000000000, 0xM00000000000000000000000000000000
+ %Cmp24 = fcmp une ppc_fp128 f0x00000000000000000000000000000000, f0x00000000000000000000000000000000
br i1 %Cmp24, label %CF80, label %CF81
CF81: ; preds = %CF80
diff --git a/llvm/test/CodeGen/PowerPC/complex-return.ll b/llvm/test/CodeGen/PowerPC/complex-return.ll
index 8ce148172b5578..ce3e2cfce5274a 100644
--- a/llvm/test/CodeGen/PowerPC/complex-return.ll
+++ b/llvm/test/CodeGen/PowerPC/complex-return.ll
@@ -9,8 +9,8 @@ entry:
%x = alloca { ppc_fp128, ppc_fp128 }, align 16
%real = getelementptr inbounds { ppc_fp128, ppc_fp128 }, ptr %x, i32 0, i32 0
%imag = getelementptr inbounds { ppc_fp128, ppc_fp128 }, ptr %x, i32 0, i32 1
- store ppc_fp128 0xM400C0000000000300000000010000000, ptr %real
- store ppc_fp128 0xMC00547AE147AE1483CA47AE147AE147A, ptr %imag
+ store ppc_fp128 f0x0000000010000000400C000000000030, ptr %real
+ store ppc_fp128 f0x3CA47AE147AE147AC00547AE147AE148, ptr %imag
%x.realp = getelementptr inbounds { ppc_fp128, ppc_fp128 }, ptr %x, i32 0, i32 0
%x.real = load ppc_fp128, ptr %x.realp
%x.imagp = getelementptr inbounds { ppc_fp128, ppc_fp128 }, ptr %x, i32 0, i32 1
diff --git a/llvm/test/CodeGen/PowerPC/constant-pool.ll b/llvm/test/CodeGen/PowerPC/constant-pool.ll
index 2ded7215d8fd6a..f0979eaadb33d3 100644
--- a/llvm/test/CodeGen/PowerPC/constant-pool.ll
+++ b/llvm/test/CodeGen/PowerPC/constant-pool.ll
@@ -55,7 +55,7 @@ entry:
; CHECK-P9-NEXT: lfd f2, .LCPI2_1@toc@l(r3)
; CHECK-P9-NEXT: blr
entry:
- ret ppc_fp128 0xM03600000DBA876CC800D16974FD9D27B
+ ret ppc_fp128 f0x800D16974FD9D27B03600000DBA876CC
}
define fp128 @__Float128ConstantPool() {
@@ -71,7 +71,7 @@ entry:
; CHECK-P9-NEXT: lxv vs34, 0(r3)
; CHECK-P9-NEXT: blr
entry:
- ret fp128 0xL00000000000000003C00FFFFC5D02B3A
+ ret fp128 f0x3C00FFFFC5D02B3A0000000000000000
}
define <16 x i8> @VectorCharConstantPool() {
@@ -343,9 +343,9 @@ define fp128 @three_constants_f128(fp128 %a, fp128 %c) {
; CHECK-P9-NEXT: xsaddqp v2, v2, v3
; CHECK-P9-NEXT: blr
entry:
- %0 = fadd fp128 %a, 0xL8000000000000000400123851EB851EB
- %1 = fadd fp128 %0, 0xL8000000000000000400123851EB991EB
- %2 = fadd fp128 %1, 0xL8000000000000000400123851EB771EB
+ %0 = fadd fp128 %a, f0x400123851EB851EB8000000000000000
+ %1 = fadd fp128 %0, f0x400123851EB991EB8000000000000000
+ %2 = fadd fp128 %1, f0x400123851EB771EB8000000000000000
ret fp128 %2
}
@@ -405,9 +405,9 @@ define ppc_fp128 @three_constants_ppcf128(ppc_fp128 %a, ppc_fp128 %c) {
; CHECK-P9-NEXT: mtlr r0
; CHECK-P9-NEXT: blr
entry:
- %0 = fadd ppc_fp128 %a, 0xM40123851EB851EB80000000000000000
- %1 = fadd ppc_fp128 %0, 0xM4012385199851EB80000000000000000
- %2 = fadd ppc_fp128 %1, 0xM4012385100851EB80000000000000000
+ %0 = fadd ppc_fp128 %a, f0x000000000000000040123851EB851EB8
+ %1 = fadd ppc_fp128 %0, f0x00000000000000004012385199851EB8
+ %2 = fadd ppc_fp128 %1, f0x00000000000000004012385100851EB8
ret ppc_fp128 %2
}
diff --git a/llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll b/llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll
index d6dd9593654011..7e7547e9f7d452 100644
--- a/llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll
+++ b/llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll
@@ -2,7 +2,7 @@
; RUN: llc < %s -verify-machineinstrs -mcpu=pwr9 -mtriple=powerpc64le-unknown-unknown | FileCheck %s -check-prefix=PWR9
; RUN: llc < %s -verify-machineinstrs -mcpu=pwr8 -mtriple=powerpc64le-unknown-unknown | FileCheck %s -check-prefix=PWR8
-@a = internal global fp128 0xL00000000000000000000000000000000, align 16
+@a = internal global fp128 f0x00000000000000000000000000000000, align 16
@x = internal global [4 x fp128] zeroinitializer, align 16
@y = internal global [4 x fp128] zeroinitializer, align 16
diff --git a/llvm/test/CodeGen/PowerPC/disable-ctr-ppcf128.ll b/llvm/test/CodeGen/PowerPC/disable-ctr-ppcf128.ll
index cd5ea16d4600b7..964a8cd6e4fae1 100644
--- a/llvm/test/CodeGen/PowerPC/disable-ctr-ppcf128.ll
+++ b/llvm/test/CodeGen/PowerPC/disable-ctr-ppcf128.ll
@@ -134,9 +134,9 @@ bb:
br label %bb6
bb6: ; preds = %bb6, %bb
- %i = phi ppc_fp128 [ %i8, %bb6 ], [ 0xM00000000000000000000000000000000, %bb ]
+ %i = phi ppc_fp128 [ %i8, %bb6 ], [ f0x00000000000000000000000000000000, %bb ]
%i7 = phi i64 [ %i9, %bb6 ], [ 0, %bb ]
- %i8 = tail call ppc_fp128 @llvm.fmuladd.ppcf128(ppc_fp128 0xM00000000000000000000000000000000, ppc_fp128 0xM00000000000000000000000000000000, ppc_fp128 %i) #4
+ %i8 = tail call ppc_fp128 @llvm.fmuladd.ppcf128(ppc_fp128 f0x00000000000000000000000000000000, ppc_fp128 f0x00000000000000000000000000000000, ppc_fp128 %i) #4
%i9 = add i64 %i7, -4
%i10 = icmp eq i64 %i9, 0
br i1 %i10, label %bb14, label %bb6
diff --git a/llvm/test/CodeGen/PowerPC/f128-aggregates.ll b/llvm/test/CodeGen/PowerPC/f128-aggregates.ll
index 4be855e30ea1d4..98de3413ce74d1 100644
--- a/llvm/test/CodeGen/PowerPC/f128-aggregates.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-aggregates.ll
@@ -643,7 +643,7 @@ if.end: ; preds = %entry
%argp.cur = load ptr, ptr %ap, align 8
%argp.next = getelementptr inbounds i8, ptr %argp.cur, i64 16
%0 = load fp128, ptr %argp.cur, align 8
- %add = fadd fp128 %0, 0xL00000000000000000000000000000000
+ %add = fadd fp128 %0, f0x00000000000000000000000000000000
%argp.next3 = getelementptr inbounds i8, ptr %argp.cur, i64 32
store ptr %argp.next3, ptr %ap, align 8
%1 = load fp128, ptr %argp.next, align 8
@@ -652,7 +652,7 @@ if.end: ; preds = %entry
br label %cleanup
cleanup: ; preds = %entry, %if.end
- %retval.0 = phi fp128 [ %add4, %if.end ], [ 0xL00000000000000000000000000000000, %entry ]
+ %retval.0 = phi fp128 [ %add4, %if.end ], [ f0x00000000000000000000000000000000, %entry ]
call void @llvm.lifetime.end.p0(i64 8, ptr nonnull %ap) #2
ret fp128 %retval.0
}
diff --git a/llvm/test/CodeGen/PowerPC/f128-arith.ll b/llvm/test/CodeGen/PowerPC/f128-arith.ll
index decc4a38f7ccd4..8314060db52de9 100644
--- a/llvm/test/CodeGen/PowerPC/f128-arith.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-arith.ll
@@ -310,7 +310,7 @@ define dso_local void @qpNAbs(ptr nocapture readonly %a, ptr nocapture %res) {
entry:
%0 = load fp128, ptr %a, align 16
%1 = tail call fp128 @llvm.fabs.f128(fp128 %0)
- %neg = fsub fp128 0xL00000000000000008000000000000000, %1
+ %neg = fsub fp128 f0x80000000000000000000000000000000, %1
store fp128 %neg, ptr %res, align 16
ret void
@@ -337,7 +337,7 @@ define dso_local void @qpNeg(ptr nocapture readonly %a, ptr nocapture %res) {
; CHECK-P8-NEXT: blr
entry:
%0 = load fp128, ptr %a, align 16
- %sub = fsub fp128 0xL00000000000000008000000000000000, %0
+ %sub = fsub fp128 f0x80000000000000000000000000000000, %0
store fp128 %sub, ptr %res, align 16
ret void
@@ -846,8 +846,8 @@ entry:
}
declare fp128 @llvm.powi.f128.i32(fp128 %Val, i32 %power)
-@a = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
-@b = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
+@a = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
+@b = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
define fp128 @qp_frem() #0 {
; CHECK-LABEL: qp_frem:
diff --git a/llvm/test/CodeGen/PowerPC/f128-compare.ll b/llvm/test/CodeGen/PowerPC/f128-compare.ll
index a03049a9945dc5..0322ff6d8231f1 100644
--- a/llvm/test/CodeGen/PowerPC/f128-compare.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-compare.ll
@@ -5,8 +5,8 @@
; RUN: -ppc-asm-full-reg-names -ppc-vsr-nums-as-vr < %s | FileCheck %s \
; RUN: -check-prefix=CHECK-P8
-@a_qp = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
-@b_qp = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
+@a_qp = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
+@b_qp = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
; Function Attrs: noinline nounwind optnone
define dso_local signext i32 @greater_qp() {
diff --git a/llvm/test/CodeGen/PowerPC/f128-conv.ll b/llvm/test/CodeGen/PowerPC/f128-conv.ll
index d8eed1fb4092ce..cdd02a1b94873c 100644
--- a/llvm/test/CodeGen/PowerPC/f128-conv.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-conv.ll
@@ -1114,11 +1114,11 @@ entry:
; Convert QP to DP
@f128Array = global [4 x fp128]
- [fp128 0xL00000000000000004004C00000000000,
- fp128 0xLF000000000000000400808AB851EB851,
- fp128 0xL5000000000000000400E0C26324C8366,
- fp128 0xL8000000000000000400A24E2E147AE14], align 16
-@f128global = global fp128 0xL300000000000000040089CA8F5C28F5C, align 16
+ [fp128 f0x4004C000000000000000000000000000,
+ fp128 f0x400808AB851EB851F000000000000000,
+ fp128 f0x400E0C26324C83665000000000000000,
+ fp128 f0x400A24E2E147AE148000000000000000], align 16
+@f128global = global fp128 f0x40089CA8F5C28F5C3000000000000000, align 16
; Function Attrs: norecurse nounwind readonly
define double @qpConv2dp(ptr nocapture readonly %a) {
@@ -1450,7 +1450,7 @@ entry:
ret void
}
-@f128Glob = common global fp128 0xL00000000000000000000000000000000, align 16
+@f128Glob = common global fp128 f0x00000000000000000000000000000000, align 16
; Function Attrs: norecurse nounwind readnone
define fp128 @dpConv2qp(double %a) {
diff --git a/llvm/test/CodeGen/PowerPC/f128-fma.ll b/llvm/test/CodeGen/PowerPC/f128-fma.ll
index d55697422c7eba..247f38230fddb9 100644
--- a/llvm/test/CodeGen/PowerPC/f128-fma.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-fma.ll
@@ -229,7 +229,7 @@ entry:
%2 = load fp128, ptr %c, align 16
%mul = fmul contract fp128 %1, %2
%add = fadd contract fp128 %0, %mul
- %sub = fsub fp128 0xL00000000000000008000000000000000, %add
+ %sub = fsub fp128 f0x80000000000000000000000000000000, %add
store fp128 %sub, ptr %res, align 16
ret void
}
@@ -290,7 +290,7 @@ entry:
%mul = fmul contract fp128 %0, %1
%2 = load fp128, ptr %c, align 16
%add = fadd contract fp128 %mul, %2
- %sub = fsub fp128 0xL00000000000000008000000000000000, %add
+ %sub = fsub fp128 f0x80000000000000000000000000000000, %add
store fp128 %sub, ptr %res, align 16
ret void
}
@@ -466,7 +466,7 @@ entry:
%2 = load fp128, ptr %c, align 16
%mul = fmul contract fp128 %1, %2
%sub = fsub contract fp128 %0, %mul
- %sub1 = fsub fp128 0xL00000000000000008000000000000000, %sub
+ %sub1 = fsub fp128 f0x80000000000000000000000000000000, %sub
store fp128 %sub1, ptr %res, align 16
ret void
}
@@ -527,7 +527,7 @@ entry:
%mul = fmul contract fp128 %0, %1
%2 = load fp128, ptr %c, align 16
%sub = fsub contract fp128 %mul, %2
- %sub1 = fsub fp128 0xL00000000000000008000000000000000, %sub
+ %sub1 = fsub fp128 f0x80000000000000000000000000000000, %sub
store fp128 %sub1, ptr %res, align 16
ret void
}
diff --git a/llvm/test/CodeGen/PowerPC/f128-passByValue.ll b/llvm/test/CodeGen/PowerPC/f128-passByValue.ll
index 1572cc082af3ea..6b8d75c5e108ea 100644
--- a/llvm/test/CodeGen/PowerPC/f128-passByValue.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-passByValue.ll
@@ -22,7 +22,7 @@ define fp128 @loadConstant() {
; CHECK-P8-NEXT: xxswapd v2, vs0
; CHECK-P8-NEXT: blr
entry:
- ret fp128 0xL00000000000000004001400000000000
+ ret fp128 f0x40014000000000000000000000000000
}
; Function Attrs: norecurse nounwind readnone
@@ -57,7 +57,7 @@ define fp128 @loadConstant2(fp128 %a, fp128 %b) {
; CHECK-P8-NEXT: blr
entry:
%add = fadd fp128 %a, %b
- %add1 = fadd fp128 %add, 0xL00000000000000004001400000000000
+ %add1 = fadd fp128 %add, f0x40014000000000000000000000000000
ret fp128 %add1
}
diff --git a/llvm/test/CodeGen/PowerPC/f128-truncateNconv.ll b/llvm/test/CodeGen/PowerPC/f128-truncateNconv.ll
index ca8911e434e4a6..35886016eda7d8 100644
--- a/llvm/test/CodeGen/PowerPC/f128-truncateNconv.ll
+++ b/llvm/test/CodeGen/PowerPC/f128-truncateNconv.ll
@@ -6,10 +6,10 @@
; RUN: -verify-machineinstrs -ppc-vsr-nums-as-vr -ppc-asm-full-reg-names < %s \
; RUN: | FileCheck %s -check-prefix=CHECK-P8
-@f128Array = global [4 x fp128] [fp128 0xL00000000000000004004C00000000000,
- fp128 0xLF000000000000000400808AB851EB851,
- fp128 0xL5000000000000000400E0C26324C8366,
- fp128 0xL8000000000000000400A24E2E147AE14],
+@f128Array = global [4 x fp128] [fp128 f0x4004C000000000000000000000000000,
+ fp128 f0x400808AB851EB851F000000000000000,
+ fp128 f0x400E0C26324C83665000000000000000,
+ fp128 f0x400A24E2E147AE148000000000000000],
align 16
; Function Attrs: norecurse nounwind readonly
diff --git a/llvm/test/CodeGen/PowerPC/float-asmprint.ll b/llvm/test/CodeGen/PowerPC/float-asmprint.ll
index bdbca29369c4df..a3865163a92f8b 100644
--- a/llvm/test/CodeGen/PowerPC/float-asmprint.ll
+++ b/llvm/test/CodeGen/PowerPC/float-asmprint.ll
@@ -4,8 +4,8 @@
; on a big-endian target. x86_fp80 can't actually print for unrelated reasons,
; but that's not really a problem.
-@var128 = global fp128 0xL00000000000000008000000000000000, align 16
-@varppc128 = global ppc_fp128 0xM80000000000000000000000000000000, align 16
+@var128 = global fp128 f0x80000000000000000000000000000000, align 16
+@varppc128 = global ppc_fp128 f0x00000000000000008000000000000000, align 16
@var64 = global double -0.0, align 8
@var32 = global float -0.0, align 4
@var16 = global half -0.0, align 2
diff --git a/llvm/test/CodeGen/PowerPC/float-load-store-pair.ll b/llvm/test/CodeGen/PowerPC/float-load-store-pair.ll
index a22a1cbef8e52b..dd7203d95d1840 100644
--- a/llvm/test/CodeGen/PowerPC/float-load-store-pair.ll
+++ b/llvm/test/CodeGen/PowerPC/float-load-store-pair.ll
@@ -20,8 +20,8 @@
@a13 = dso_local local_unnamed_addr global double 0.000000e+00, align 8
@a14 = dso_local local_unnamed_addr global double 0.000000e+00, align 8
@a15 = dso_local local_unnamed_addr global double 0.000000e+00, align 8
-@a16 = dso_local local_unnamed_addr global ppc_fp128 0xM00000000000000000000000000000000, align 16
-@a17 = dso_local local_unnamed_addr global fp128 0xL00000000000000000000000000000000, align 16
+@a16 = dso_local local_unnamed_addr global ppc_fp128 f0x00000000000000000000000000000000, align 16
+@a17 = dso_local local_unnamed_addr global fp128 f0x00000000000000000000000000000000, align 16
; Because this test function is trying to pass float arguments on the stack,
; the fpr is only used to load/store float arguments
diff --git a/llvm/test/CodeGen/PowerPC/fminnum.ll b/llvm/test/CodeGen/PowerPC/fminnum.ll
index d2b9e2b421e31d..b8a67f8a80b932 100644
--- a/llvm/test/CodeGen/PowerPC/fminnum.ll
+++ b/llvm/test/CodeGen/PowerPC/fminnum.ll
@@ -440,7 +440,7 @@ define ppc_fp128 @fminnum_const(ppc_fp128 %0) {
; CHECK-NEXT: addi 1, 1, 96
; CHECK-NEXT: mtlr 0
; CHECK-NEXT: blr
- %2 = tail call fast ppc_fp128 @llvm.minnum.ppcf128(ppc_fp128 %0, ppc_fp128 0xM3FF00000000000000000000000000000)
+ %2 = tail call fast ppc_fp128 @llvm.minnum.ppcf128(ppc_fp128 %0, ppc_fp128 f0x00000000000000003FF0000000000000)
ret ppc_fp128 %2
}
diff --git a/llvm/test/CodeGen/PowerPC/fp-classify.ll b/llvm/test/CodeGen/PowerPC/fp-classify.ll
index dc9853ff2e3014..7c04f046f7c7d9 100644
--- a/llvm/test/CodeGen/PowerPC/fp-classify.ll
+++ b/llvm/test/CodeGen/PowerPC/fp-classify.ll
@@ -80,7 +80,7 @@ define zeroext i1 @abs_isinfq(fp128 %x) {
; P9-NEXT: blr
entry:
%0 = tail call fp128 @llvm.fabs.f128(fp128 %x)
- %cmpinf = fcmp oeq fp128 %0, 0xL00000000000000007FFF000000000000
+ %cmpinf = fcmp oeq fp128 %0, f0x7FFF0000000000000000000000000000
ret i1 %cmpinf
}
@@ -162,7 +162,7 @@ define zeroext i1 @abs_isinfornanq(fp128 %x) {
; P9-NEXT: blr
entry:
%0 = tail call fp128 @llvm.fabs.f128(fp128 %x)
- %cmpinf = fcmp ueq fp128 %0, 0xL00000000000000007FFF000000000000
+ %cmpinf = fcmp ueq fp128 %0, f0x7FFF0000000000000000000000000000
ret i1 %cmpinf
}
@@ -292,7 +292,7 @@ define zeroext i1 @iszeroq(fp128 %x) {
; P9-NEXT: iseleq 3, 4, 3
; P9-NEXT: blr
entry:
- %cmp = fcmp oeq fp128 %x, 0xL00000000000000000000000000000000
+ %cmp = fcmp oeq fp128 %x, f0x00000000000000000000000000000000
ret i1 %cmp
}
diff --git a/llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll b/llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll
index ebec8c1c4d6543..01e32cbf0eda96 100644
--- a/llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll
+++ b/llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll
@@ -83,7 +83,7 @@ define i128 @test_neg(ppc_fp128 %x) nounwind {
; PPC32-NEXT: addi 1, 1, 32
; PPC32-NEXT: blr
entry:
- %0 = fsub ppc_fp128 0xM80000000000000000000000000000000, %x
+ %0 = fsub ppc_fp128 f0x00000000000000008000000000000000, %x
%1 = bitcast ppc_fp128 %0 to i128
ret i128 %1
}
@@ -229,7 +229,7 @@ define i128 @test_copysign_const(ppc_fp128 %x) nounwind {
; PPC32-NEXT: addi 1, 1, 32
; PPC32-NEXT: blr
entry:
- %0 = tail call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 0xM400F000000000000BCB0000000000000, ppc_fp128 %x)
+ %0 = tail call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 f0xBCB0000000000000400F000000000000, ppc_fp128 %x)
%1 = bitcast ppc_fp128 %0 to i128
ret i128 %1
}
diff --git a/llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll b/llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll
index 14fea561831985..1b50a69bcd2e7a 100644
--- a/llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll
+++ b/llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll
@@ -16,9 +16,9 @@
@_ZL19StaticSignedLongVar = internal unnamed_addr global i64 0, align 8
@_ZL14StaticFloatVar = internal unnamed_addr global float 0.000000e+00, align 4
@_ZL15StaticDoubleVar = internal unnamed_addr global double 0.000000e+00, align 8
-@_ZL19StaticLongDoubleVar = internal unnamed_addr global ppc_fp128 0xM00000000000000000000000000000000, align 16
+@_ZL19StaticLongDoubleVar = internal unnamed_addr global ppc_fp128 f0x00000000000000000000000000000000, align 16
@_ZL23StaticSigned__Int128Var = internal unnamed_addr global i128 0, align 16
-@_ZL19Static__Float128Var = internal unnamed_addr global fp128 0xL00000000000000000000000000000000, align 16
+@_ZL19Static__Float128Var = internal unnamed_addr global fp128 f0x00000000000000000000000000000000, align 16
@_ZL25StaticVectorSignedCharVar = internal unnamed_addr global <16 x i8> zeroinitializer, align 16
@_ZL26StaticVectorSignedShortVar = internal unnamed_addr global <8 x i16> zeroinitializer, align 16
@_ZL24StaticVectorSignedIntVar = internal unnamed_addr global <4 x i32> zeroinitializer, align 16
diff --git a/llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll b/llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll
index 42569333002437..e54e720f998bd2 100644
--- a/llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll
+++ b/llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll
@@ -1274,7 +1274,7 @@ define half @PR40273(half) #0 {
; SOFT-NEXT: ld r0, 16(r1)
; SOFT-NEXT: mtlr r0
; SOFT-NEXT: blr
- %2 = fcmp une half %0, 0xH0000
+ %2 = fcmp une half %0, f0x0000
%3 = uitofp i1 %2 to half
ret half %3
}
diff --git a/llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll b/llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll
index 7fec7e039eca78..1c62671d5203d9 100644
--- a/llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll
+++ b/llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll
@@ -1,6 +1,6 @@
; RUN: llc -verify-machineinstrs -O2 -mtriple=powerpc-unknown-linux-gnu < %s | FileCheck %s
-@x = global ppc_fp128 0xM405EDA5E353F7CEE0000000000000000, align 16
+@x = global ppc_fp128 f0x0000000000000000405EDA5E353F7CEE, align 16
@.str = private unnamed_addr constant [5 x i8] c"%Lf\0A\00", align 1
diff --git a/llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll b/llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll
index a272f9e73a04d7..56504ac4661444 100644
--- a/llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll
+++ b/llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll
@@ -7,7 +7,7 @@ target triple = "powerpc-buildroot-linux-gnu"
define i32 @main() #0 {
entry:
- %call = tail call i32 (ptr, ...) @printf(ptr @.str, ppc_fp128 0xM3FF00000000000000000000000000000)
+ %call = tail call i32 (ptr, ...) @printf(ptr @.str, ppc_fp128 f0x00000000000000003FF0000000000000)
ret i32 0
}
diff --git a/llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll b/llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll
index 3c3044762ea9df..4df4447dbe408e 100644
--- a/llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll
+++ b/llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll
@@ -3,7 +3,7 @@
target datalayout = "E-m:e-p:32:32-i64:64-n32"
target triple = "powerpc-buildroot-linux-gnu"
-@x = global ppc_fp128 0xM3FF00000000000000000000000000000, align 16
+@x = global ppc_fp128 f0x00000000000000003FF0000000000000, align 16
@.str = private unnamed_addr constant [9 x i8] c"%Lf %Lf\0A\00", align 1
define void @foo() #0 {
diff --git a/llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll b/llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll
index 3966f85cc86625..08c0ea45d35e32 100644
--- a/llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll
+++ b/llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll
@@ -1,12 +1,12 @@
; RUN: llvm-as < %s -o - | llvm-dis - | FileCheck %s
;CHECK-LABEL: main
-;CHECK: store ppc_fp128 0xM0000000000000000FFFFFFFFFFFFFFFF
+;CHECK: store ppc_fp128 f0xFFFFFFFFFFFFFFFF0000000000000000
define i32 @main() local_unnamed_addr {
_main_entry:
%e3 = alloca ppc_fp128, align 16
- store ppc_fp128 0xM0000000000000000FFFFFFFFFFFFFFFF, ptr %e3, align 16
+ store ppc_fp128 f0xFFFFFFFFFFFFFFFF0000000000000000, ptr %e3, align 16
%0 = call i64 @foo( ptr nonnull %e3)
ret i32 undef
}
diff --git a/llvm/test/CodeGen/PowerPC/ppcf128-2.ll b/llvm/test/CodeGen/PowerPC/ppcf128-2.ll
index 66eb4548c170e8..973b4f47c270cc 100644
--- a/llvm/test/CodeGen/PowerPC/ppcf128-2.ll
+++ b/llvm/test/CodeGen/PowerPC/ppcf128-2.ll
@@ -4,7 +4,7 @@ define i64 @__fixtfdi(ppc_fp128 %a) nounwind {
entry:
br i1 false, label %bb, label %bb8
bb: ; preds = %entry
- %tmp5 = fsub ppc_fp128 0xM80000000000000000000000000000000, %a ; <ppc_fp128> [#uses=1]
+ %tmp5 = fsub ppc_fp128 f0x00000000000000008000000000000000, %a ; <ppc_fp128> [#uses=1]
%tmp6 = tail call i64 @__fixunstfdi( ppc_fp128 %tmp5 ) nounwind ; <i64> [#uses=0]
ret i64 0
bb8: ; preds = %entry
diff --git a/llvm/test/CodeGen/PowerPC/ppcf128-4.ll b/llvm/test/CodeGen/PowerPC/ppcf128-4.ll
index 67fcf46147fe33..25986b66e9d3ab 100644
--- a/llvm/test/CodeGen/PowerPC/ppcf128-4.ll
+++ b/llvm/test/CodeGen/PowerPC/ppcf128-4.ll
@@ -2,7 +2,7 @@
define ppc_fp128 @__floatditf(i64 %u) nounwind {
entry:
- %tmp6 = fmul ppc_fp128 0xM00000000000000000000000000000000, 0xM41F00000000000000000000000000000
+ %tmp6 = fmul ppc_fp128 f0x00000000000000000000000000000000, f0x000000000000000041F0000000000000
%tmp78 = trunc i64 %u to i32
%tmp789 = uitofp i32 %tmp78 to ppc_fp128
%tmp11 = fadd ppc_fp128 %tmp789, %tmp6
diff --git a/llvm/test/CodeGen/PowerPC/ppcf128-endian.ll b/llvm/test/CodeGen/PowerPC/ppcf128-endian.ll
index 309139c5bcf0c1..f92e90d808a96f 100644
--- a/llvm/test/CodeGen/PowerPC/ppcf128-endian.ll
+++ b/llvm/test/CodeGen/PowerPC/ppcf128-endian.ll
@@ -4,7 +4,7 @@
target datalayout = "e-m:e-i64:64-n32:64"
target triple = "powerpc64le-unknown-linux-gnu"
-@g = common global ppc_fp128 0xM00000000000000000000000000000000, align 16
+@g = common global ppc_fp128 f0x00000000000000000000000000000000, align 16
define void @callee(ppc_fp128 %x) {
; CHECK-LABEL: callee:
@@ -69,7 +69,7 @@ define void @caller_const() {
; CHECK-NEXT: mtlr 0
; CHECK-NEXT: blr
entry:
- call void @test(ppc_fp128 0xM3FF00000000000000000000000000000)
+ call void @test(ppc_fp128 f0x00000000000000003FF0000000000000)
ret void
}
diff --git a/llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir b/llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir
index 474c288bba88bf..d027375b081351 100644
--- a/llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir
+++ b/llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir
@@ -4,7 +4,7 @@
--- |
define ppc_fp128 @freeze_select(ppc_fp128 %a, ppc_fp128 %b) {
%sel.frozen = freeze ppc_fp128 %a
- %cmp = fcmp one ppc_fp128 %sel.frozen, 0xM00000000000000000000000000000000
+ %cmp = fcmp one ppc_fp128 %sel.frozen, f0x00000000000000000000000000000000
br i1 %cmp, label %select.end, label %select.false
select.false: ; preds = %0
diff --git a/llvm/test/CodeGen/PowerPC/ppcf128sf.ll b/llvm/test/CodeGen/PowerPC/ppcf128sf.ll
index e9e718c8632657..560136dc17fdf9 100644
--- a/llvm/test/CodeGen/PowerPC/ppcf128sf.ll
+++ b/llvm/test/CodeGen/PowerPC/ppcf128sf.ll
@@ -1,7 +1,7 @@
; RUN: llc -verify-machineinstrs -mtriple=powerpc-unknown-linux-gnu -O0 < %s | FileCheck %s
-@ld = common global ppc_fp128 0xM00000000000000000000000000000000, align 16
-@ld2 = common global ppc_fp128 0xM00000000000000000000000000000000, align 16
+@ld = common global ppc_fp128 f0x00000000000000000000000000000000, align 16
+@ld2 = common global ppc_fp128 f0x00000000000000000000000000000000, align 16
@d = common global double 0.000000e+00, align 8
@f = common global float 0.000000e+00, align 4
@i = common global i32 0, align 4
diff --git a/llvm/test/CodeGen/PowerPC/pr15632.ll b/llvm/test/CodeGen/PowerPC/pr15632.ll
index d0b29e238ac71b..86be31de01deed 100644
--- a/llvm/test/CodeGen/PowerPC/pr15632.ll
+++ b/llvm/test/CodeGen/PowerPC/pr15632.ll
@@ -3,13 +3,13 @@
target datalayout = "E-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v128:128:128-n32:64"
target triple = "powerpc64-unknown-linux-gnu"
-@ld2 = common global ppc_fp128 0xM00000000000000000000000000000000, align 16
+@ld2 = common global ppc_fp128 f0x00000000000000000000000000000000, align 16
declare void @other(ppc_fp128 %tmp70)
define void @bug() {
entry:
%x = load ppc_fp128, ptr @ld2, align 16
- %tmp70 = frem ppc_fp128 0xM00000000000000000000000000000000, %x
+ %tmp70 = frem ppc_fp128 f0x00000000000000000000000000000000, %x
call void @other(ppc_fp128 %tmp70)
unreachable
}
diff --git a/llvm/test/CodeGen/PowerPC/pr16556-2.ll b/llvm/test/CodeGen/PowerPC/pr16556-2.ll
index 8f871cafcf6388..b065125bdae93f 100644
--- a/llvm/test/CodeGen/PowerPC/pr16556-2.ll
+++ b/llvm/test/CodeGen/PowerPC/pr16556-2.ll
@@ -32,7 +32,7 @@ noassert: ; preds = %entry
%tmp4 = sitofp i64 %tmp3 to ppc_fp128
%tmp5 = load i64, ptr @_D4core4time12TickDuration11ticksPerSecyl
%tmp6 = sitofp i64 %tmp5 to ppc_fp128
- %tmp7 = fdiv ppc_fp128 %tmp6, 0xM80000000000000000000000000000000
+ %tmp7 = fdiv ppc_fp128 %tmp6, f0x00000000000000008000000000000000
%tmp8 = fdiv ppc_fp128 %tmp4, %tmp7
%tmp9 = fptosi ppc_fp128 %tmp8 to i64
ret i64 %tmp9
diff --git a/llvm/test/CodeGen/PowerPC/pr16573.ll b/llvm/test/CodeGen/PowerPC/pr16573.ll
index a5538fc8b8ea31..8a59c839a0250a 100644
--- a/llvm/test/CodeGen/PowerPC/pr16573.ll
+++ b/llvm/test/CodeGen/PowerPC/pr16573.ll
@@ -3,7 +3,7 @@
target triple = "powerpc64-unknown-linux-gnu"
define double @test() {
- %1 = fptrunc ppc_fp128 0xM818F2887B9295809800000000032D000 to double
+ %1 = fptrunc ppc_fp128 f0x800000000032D000818F2887B9295809 to double
ret double %1
}
diff --git a/llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll b/llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll
index 7fac56fe04712d..9ee682cad15e48 100644
--- a/llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll
+++ b/llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll
@@ -83,7 +83,7 @@ define signext i32 @t3(ppc_fp128 %x) local_unnamed_addr #0 {
; CHECK-NVSXALT-NEXT: isel 3, 4, 3, 20
; CHECK-NVSXALT-NEXT: blr
entry:
- %cmp = fcmp ogt ppc_fp128 %x, 0xM00000000000000000000000000000000
+ %cmp = fcmp ogt ppc_fp128 %x, f0x00000000000000000000000000000000
%tmp = select i1 %cmp, i32 43, i32 11
ret i32 %tmp
diff --git a/llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll b/llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll
index 10fba54c2d6a43..2de5a540d8cf78 100644
--- a/llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll
+++ b/llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll
@@ -340,7 +340,7 @@ if.end: ; preds = %if.then, %entry
call void @llvm.memcpy.p0.p0.i64(ptr align 16 %agg.tmp117, ptr align 16 @s1998, i64 5168, i1 false)
call void @llvm.memcpy.p0.p0.i64(ptr align 16 %agg.tmp118, ptr align 16 getelementptr inbounds ([5 x %struct.S1998], ptr @a1998, i32 0, i64 2), i64 5168, i1 false)
call void @llvm.memcpy.p0.p0.i64(ptr align 16 %agg.tmp119, ptr align 16 @s1998, i64 5168, i1 false)
- call void (i32, ...) @check1998va(i32 signext 2, ptr byval(%struct.S1998) align 16 %agg.tmp116, ptr byval(%struct.S1998) align 16 %agg.tmp117, ppc_fp128 0xM40000000000000000000000000000000, ptr byval(%struct.S1998) align 16 %agg.tmp118, ptr byval(%struct.S1998) align 16 %agg.tmp119)
+ call void (i32, ...) @check1998va(i32 signext 2, ptr byval(%struct.S1998) align 16 %agg.tmp116, ptr byval(%struct.S1998) align 16 %agg.tmp117, ppc_fp128 f0x00000000000000004000000000000000, ptr byval(%struct.S1998) align 16 %agg.tmp118, ptr byval(%struct.S1998) align 16 %agg.tmp119)
ret void
}
diff --git a/llvm/test/CodeGen/PowerPC/rs-undef-use.ll b/llvm/test/CodeGen/PowerPC/rs-undef-use.ll
index 0fccc5469f3a56..b078cfbb600d7d 100644
--- a/llvm/test/CodeGen/PowerPC/rs-undef-use.ll
+++ b/llvm/test/CodeGen/PowerPC/rs-undef-use.ll
@@ -32,7 +32,7 @@ CF84: ; preds = %CF84, %CF84.critedg
CF85: ; preds = %CF84
%L47 = load i64, ptr %A3
store i64 %E18, ptr %A3
- store ppc_fp128 0xM4D436562A0416DE00000000000000000, ptr %A2
+ store ppc_fp128 f0x00000000000000004D436562A0416DE0, ptr %A2
%Cmp61 = icmp slt i64 %L47, %L40
br i1 %Cmp61, label %CF, label %CF77
diff --git a/llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll b/llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll
index ca9bacebe7a33a..5e736f859ac186 100644
--- a/llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll
+++ b/llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll
@@ -83,7 +83,7 @@ define fp128 @olt_sel(fp128 %a, fp128 %b) {
; CHECK-NEXT: vmr v2, v3
; CHECK-NEXT: blr
entry:
- %0 = fcmp fast olt fp128 %a, 0xL00000000000000000000000000000000
- %1 = select i1 %0, fp128 %b, fp128 0xL00000000000000000000000000000000
+ %0 = fcmp fast olt fp128 %a, f0x00000000000000000000000000000000
+ %1 = select i1 %0, fp128 %b, fp128 f0x00000000000000000000000000000000
ret fp128 %1
}
diff --git a/llvm/test/CodeGen/PowerPC/std-unal-fi.ll b/llvm/test/CodeGen/PowerPC/std-unal-fi.ll
index b488ddc1235415..64f5f948d0afdb 100644
--- a/llvm/test/CodeGen/PowerPC/std-unal-fi.ll
+++ b/llvm/test/CodeGen/PowerPC/std-unal-fi.ll
@@ -29,7 +29,7 @@ CF83: ; preds = %CF82
CF81: ; preds = %CF83
%Shuff43 = shufflevector <16 x i32> %Shuff7, <16 x i32> undef, <16 x i32> <i32 15, i32 17, i32 19, i32 21, i32 23, i32 undef, i32 undef, i32 29, i32 31, i32 undef, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13>
- store ppc_fp128 0xM00000000000000000000000000000000, ptr %A4
+ store ppc_fp128 f0x00000000000000000000000000000000, ptr %A4
br i1 undef, label %CF77, label %CF78
CF78: ; preds = %CF78, %CF81
diff --git a/llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll b/llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll
index 4a036a7868c1a9..0e3102763c401a 100644
--- a/llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll
+++ b/llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll
@@ -3513,7 +3513,7 @@ define dso_local ppc_fp128 @v2ppcf128(<2 x ppc_fp128> %a) local_unnamed_addr #0
; PWR10BE-NEXT: mtlr r0
; PWR10BE-NEXT: blr
entry:
- %0 = call ppc_fp128 @llvm.vector.reduce.fadd.v2ppcf128(ppc_fp128 0xM80000000000000000000000000000000, <2 x ppc_fp128> %a)
+ %0 = call ppc_fp128 @llvm.vector.reduce.fadd.v2ppcf128(ppc_fp128 f0x00000000000000008000000000000000, <2 x ppc_fp128> %a)
ret ppc_fp128 %0
}
@@ -3688,7 +3688,7 @@ define dso_local ppc_fp128 @v2ppcf128_fast(<2 x ppc_fp128> %a) local_unnamed_add
; PWR10BE-NEXT: mtlr r0
; PWR10BE-NEXT: blr
entry:
- %0 = call fast ppc_fp128 @llvm.vector.reduce.fadd.v2ppcf128(ppc_fp128 0xM80000000000000000000000000000000, <2 x ppc_fp128> %a)
+ %0 = call fast ppc_fp128 @llvm.vector.reduce.fadd.v2ppcf128(ppc_fp128 f0x00000000000000008000000000000000, <2 x ppc_fp128> %a)
ret ppc_fp128 %0
}
@@ -3818,7 +3818,7 @@ define dso_local ppc_fp128 @v4ppcf128(<4 x ppc_fp128> %a) local_unnamed_addr #0
; PWR10BE-NEXT: mtlr r0
; PWR10BE-NEXT: blr
entry:
- %0 = call ppc_fp128 @llvm.vector.reduce.fadd.v4ppcf128(ppc_fp128 0xM80000000000000000000000000000000, <4 x ppc_fp128> %a)
+ %0 = call ppc_fp128 @llvm.vector.reduce.fadd.v4ppcf128(ppc_fp128 f0x00000000000000008000000000000000, <4 x ppc_fp128> %a)
ret ppc_fp128 %0
}
@@ -4197,7 +4197,7 @@ define dso_local ppc_fp128 @v4ppcf128_fast(<4 x ppc_fp128> %a) local_unnamed_add
; PWR10BE-NEXT: mtlr r0
; PWR10BE-NEXT: blr
entry:
- %0 = call fast ppc_fp128 @llvm.vector.reduce.fadd.v4ppcf128(ppc_fp128 0xM80000000000000000000000000000000, <4 x ppc_fp128> %a)
+ %0 = call fast ppc_fp128 @llvm.vector.reduce.fadd.v4ppcf128(ppc_fp128 f0x00000000000000008000000000000000, <4 x ppc_fp128> %a)
ret ppc_fp128 %0
}
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll b/llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll
index e29c450c26cb4b..46a47a4083bf28 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll
@@ -146,7 +146,7 @@ define fp128 @constant(fp128 %x) nounwind {
; CHECK-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
- %a = fadd fp128 %x, 0xL00000000000000007FFF000000000000
+ %a = fadd fp128 %x, f0x7FFF0000000000000000000000000000
ret fp128 %a
}
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir
index 8951e373ba7a96..9bcb6a9679d21c 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir
@@ -23,7 +23,7 @@ body: |
; RV64-NEXT: [[FMV_H_X:%[0-9]+]]:fpr16 = FMV_H_X [[ADDIW]]
; RV64-NEXT: $f10_h = COPY [[FMV_H_X]]
; RV64-NEXT: PseudoRET implicit $f10_h
- %0:fprb(s16) = G_FCONSTANT half 0xH4248
+ %0:fprb(s16) = G_FCONSTANT half f0x4248
$f10_h = COPY %0(s16)
PseudoRET implicit $f10_h
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll
index 51809d00699103..d043f49fe45efe 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll
@@ -865,8 +865,8 @@ define half @caller_half_return_stack2(half %x, half %y) nounwind {
; RV32I-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; RV32I-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $x11
; RV32I-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[COPY1]](s32)
- ; RV32I-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; RV32I-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; RV32I-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; RV32I-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; RV32I-NEXT: ADJCALLSTACKDOWN 4, 0, implicit-def $x2, implicit $x2
; RV32I-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[TRUNC]](s16)
; RV32I-NEXT: [[ANYEXT1:%[0-9]+]]:_(s32) = G_ANYEXT [[C]](s16)
@@ -905,8 +905,8 @@ define half @caller_half_return_stack2(half %x, half %y) nounwind {
; RV32IF-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; RV32IF-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $f11_f
; RV32IF-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[COPY1]](s32)
- ; RV32IF-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; RV32IF-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; RV32IF-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; RV32IF-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; RV32IF-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; RV32IF-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[TRUNC]](s16)
; RV32IF-NEXT: [[ANYEXT1:%[0-9]+]]:_(s32) = G_ANYEXT [[C]](s16)
@@ -940,8 +940,8 @@ define half @caller_half_return_stack2(half %x, half %y) nounwind {
; RV32IZFH-NEXT: {{ $}}
; RV32IZFH-NEXT: [[COPY:%[0-9]+]]:_(s16) = COPY $f10_h
; RV32IZFH-NEXT: [[COPY1:%[0-9]+]]:_(s16) = COPY $f11_h
- ; RV32IZFH-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; RV32IZFH-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; RV32IZFH-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; RV32IZFH-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; RV32IZFH-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; RV32IZFH-NEXT: $f10_h = COPY [[COPY]](s16)
; RV32IZFH-NEXT: $f11_h = COPY [[C]](s16)
@@ -967,8 +967,8 @@ define half @caller_half_return_stack2(half %x, half %y) nounwind {
; RV64I-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s64)
; RV64I-NEXT: [[COPY1:%[0-9]+]]:_(s64) = COPY $x11
; RV64I-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[COPY1]](s64)
- ; RV64I-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; RV64I-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; RV64I-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; RV64I-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; RV64I-NEXT: ADJCALLSTACKDOWN 8, 0, implicit-def $x2, implicit $x2
; RV64I-NEXT: [[ANYEXT:%[0-9]+]]:_(s64) = G_ANYEXT [[TRUNC]](s16)
; RV64I-NEXT: [[ANYEXT1:%[0-9]+]]:_(s64) = G_ANYEXT [[C]](s16)
@@ -1007,8 +1007,8 @@ define half @caller_half_return_stack2(half %x, half %y) nounwind {
; RV64IF-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[COPY]](s32)
; RV64IF-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $f11_f
; RV64IF-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[COPY1]](s32)
- ; RV64IF-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; RV64IF-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; RV64IF-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; RV64IF-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; RV64IF-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; RV64IF-NEXT: [[ANYEXT:%[0-9]+]]:_(s32) = G_ANYEXT [[TRUNC]](s16)
; RV64IF-NEXT: [[ANYEXT1:%[0-9]+]]:_(s32) = G_ANYEXT [[C]](s16)
@@ -1042,8 +1042,8 @@ define half @caller_half_return_stack2(half %x, half %y) nounwind {
; RV64IZFH-NEXT: {{ $}}
; RV64IZFH-NEXT: [[COPY:%[0-9]+]]:_(s16) = COPY $f10_h
; RV64IZFH-NEXT: [[COPY1:%[0-9]+]]:_(s16) = COPY $f11_h
- ; RV64IZFH-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH3C00
- ; RV64IZFH-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH4200
+ ; RV64IZFH-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x3C00
+ ; RV64IZFH-NEXT: [[C1:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x4200
; RV64IZFH-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; RV64IZFH-NEXT: $f10_h = COPY [[COPY]](s16)
; RV64IZFH-NEXT: $f11_h = COPY [[C]](s16)
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll
index 3fcaa81e1a5520..76952a699b05ed 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll
@@ -851,7 +851,7 @@ define i32 @caller_large_scalars() nounwind {
; ILP32-LABEL: name: caller_large_scalars
; ILP32: bb.1 (%ir-block.0):
; ILP32-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 1
- ; ILP32-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; ILP32-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; ILP32-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; ILP32-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p0) = G_FRAME_INDEX %stack.0
; ILP32-NEXT: G_STORE [[C]](s128), [[FRAME_INDEX]](p0) :: (store (s128) into %stack.0, align 8)
@@ -868,7 +868,7 @@ define i32 @caller_large_scalars() nounwind {
; ILP32F-LABEL: name: caller_large_scalars
; ILP32F: bb.1 (%ir-block.0):
; ILP32F-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 1
- ; ILP32F-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; ILP32F-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; ILP32F-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; ILP32F-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p0) = G_FRAME_INDEX %stack.0
; ILP32F-NEXT: G_STORE [[C]](s128), [[FRAME_INDEX]](p0) :: (store (s128) into %stack.0, align 8)
@@ -885,7 +885,7 @@ define i32 @caller_large_scalars() nounwind {
; ILP32D-LABEL: name: caller_large_scalars
; ILP32D: bb.1 (%ir-block.0):
; ILP32D-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 1
- ; ILP32D-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; ILP32D-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; ILP32D-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; ILP32D-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p0) = G_FRAME_INDEX %stack.0
; ILP32D-NEXT: G_STORE [[C]](s128), [[FRAME_INDEX]](p0) :: (store (s128) into %stack.0, align 8)
@@ -898,7 +898,7 @@ define i32 @caller_large_scalars() nounwind {
; ILP32D-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $x10
; ILP32D-NEXT: $x10 = COPY [[COPY]](s32)
; ILP32D-NEXT: PseudoRET implicit $x10
- %1 = call i32 @callee_large_scalars(i128 1, fp128 0xL00000000000000007FFF000000000000)
+ %1 = call i32 @callee_large_scalars(i128 1, fp128 f0x7FFF0000000000000000000000000000)
ret i32 %1
}
@@ -947,7 +947,7 @@ define i32 @caller_large_scalars_exhausted_regs() nounwind {
; ILP32-NEXT: [[C6:%[0-9]+]]:_(s32) = G_CONSTANT i32 7
; ILP32-NEXT: [[C7:%[0-9]+]]:_(s128) = G_CONSTANT i128 8
; ILP32-NEXT: [[C8:%[0-9]+]]:_(s32) = G_CONSTANT i32 9
- ; ILP32-NEXT: [[C9:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; ILP32-NEXT: [[C9:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; ILP32-NEXT: ADJCALLSTACKDOWN 8, 0, implicit-def $x2, implicit $x2
; ILP32-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p0) = G_FRAME_INDEX %stack.0
; ILP32-NEXT: G_STORE [[C7]](s128), [[FRAME_INDEX]](p0) :: (store (s128) into %stack.0, align 8)
@@ -985,7 +985,7 @@ define i32 @caller_large_scalars_exhausted_regs() nounwind {
; ILP32F-NEXT: [[C6:%[0-9]+]]:_(s32) = G_CONSTANT i32 7
; ILP32F-NEXT: [[C7:%[0-9]+]]:_(s128) = G_CONSTANT i128 8
; ILP32F-NEXT: [[C8:%[0-9]+]]:_(s32) = G_CONSTANT i32 9
- ; ILP32F-NEXT: [[C9:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; ILP32F-NEXT: [[C9:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; ILP32F-NEXT: ADJCALLSTACKDOWN 8, 0, implicit-def $x2, implicit $x2
; ILP32F-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p0) = G_FRAME_INDEX %stack.0
; ILP32F-NEXT: G_STORE [[C7]](s128), [[FRAME_INDEX]](p0) :: (store (s128) into %stack.0, align 8)
@@ -1023,7 +1023,7 @@ define i32 @caller_large_scalars_exhausted_regs() nounwind {
; ILP32D-NEXT: [[C6:%[0-9]+]]:_(s32) = G_CONSTANT i32 7
; ILP32D-NEXT: [[C7:%[0-9]+]]:_(s128) = G_CONSTANT i128 8
; ILP32D-NEXT: [[C8:%[0-9]+]]:_(s32) = G_CONSTANT i32 9
- ; ILP32D-NEXT: [[C9:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; ILP32D-NEXT: [[C9:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; ILP32D-NEXT: ADJCALLSTACKDOWN 8, 0, implicit-def $x2, implicit $x2
; ILP32D-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p0) = G_FRAME_INDEX %stack.0
; ILP32D-NEXT: G_STORE [[C7]](s128), [[FRAME_INDEX]](p0) :: (store (s128) into %stack.0, align 8)
@@ -1051,7 +1051,7 @@ define i32 @caller_large_scalars_exhausted_regs() nounwind {
; ILP32D-NEXT: PseudoRET implicit $x10
%1 = call i32 @callee_large_scalars_exhausted_regs(
i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i128 8, i32 9,
- fp128 0xL00000000000000007FFF000000000000)
+ fp128 f0x7FFF0000000000000000000000000000)
ret i32 %1
}
@@ -1246,10 +1246,10 @@ define fp128 @callee_large_scalar_ret() nounwind {
; RV32I-NEXT: liveins: $x10
; RV32I-NEXT: {{ $}}
; RV32I-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
- ; RV32I-NEXT: [[C:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; RV32I-NEXT: [[C:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; RV32I-NEXT: G_STORE [[C]](s128), [[COPY]](p0) :: (store (s128))
; RV32I-NEXT: PseudoRET
- ret fp128 0xL00000000000000007FFF000000000000
+ ret fp128 f0x7FFF0000000000000000000000000000
}
define void @caller_large_scalar_ret() nounwind {
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll
index 17c6e55fa8d2c6..7b5fb190aaa32e 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll
@@ -108,7 +108,7 @@ define i64 @caller_i128_fp128_in_regs() nounwind {
; LP64-LABEL: name: caller_i128_fp128_in_regs
; LP64: bb.1 (%ir-block.0):
; LP64-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 1
- ; LP64-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; LP64-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; LP64-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; LP64-NEXT: [[UV:%[0-9]+]]:_(s64), [[UV1:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C]](s128)
; LP64-NEXT: [[UV2:%[0-9]+]]:_(s64), [[UV3:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C1]](s128)
@@ -125,7 +125,7 @@ define i64 @caller_i128_fp128_in_regs() nounwind {
; LP64F-LABEL: name: caller_i128_fp128_in_regs
; LP64F: bb.1 (%ir-block.0):
; LP64F-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 1
- ; LP64F-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; LP64F-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; LP64F-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; LP64F-NEXT: [[UV:%[0-9]+]]:_(s64), [[UV1:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C]](s128)
; LP64F-NEXT: [[UV2:%[0-9]+]]:_(s64), [[UV3:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C1]](s128)
@@ -142,7 +142,7 @@ define i64 @caller_i128_fp128_in_regs() nounwind {
; LP64D-LABEL: name: caller_i128_fp128_in_regs
; LP64D: bb.1 (%ir-block.0):
; LP64D-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 1
- ; LP64D-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; LP64D-NEXT: [[C1:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; LP64D-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $x2, implicit $x2
; LP64D-NEXT: [[UV:%[0-9]+]]:_(s64), [[UV1:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C]](s128)
; LP64D-NEXT: [[UV2:%[0-9]+]]:_(s64), [[UV3:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C1]](s128)
@@ -155,7 +155,7 @@ define i64 @caller_i128_fp128_in_regs() nounwind {
; LP64D-NEXT: [[COPY:%[0-9]+]]:_(s64) = COPY $x10
; LP64D-NEXT: $x10 = COPY [[COPY]](s64)
; LP64D-NEXT: PseudoRET implicit $x10
- %1 = call i64 @callee_i128_fp128_in_regs(i128 1, fp128 0xL00000000000000007FFF000000000000)
+ %1 = call i64 @callee_i128_fp128_in_regs(i128 1, fp128 f0x7FFF0000000000000000000000000000)
ret i64 %1
}
@@ -910,12 +910,12 @@ define i64 @caller_small_scalar_ret() nounwind {
define fp128 @callee_fp128_ret() nounwind {
; RV64I-LABEL: name: callee_fp128_ret
; RV64I: bb.1 (%ir-block.0):
- ; RV64I-NEXT: [[C:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 0xL00000000000000007FFF000000000000
+ ; RV64I-NEXT: [[C:%[0-9]+]]:_(s128) = G_FCONSTANT fp128 f0x7FFF0000000000000000000000000000
; RV64I-NEXT: [[UV:%[0-9]+]]:_(s64), [[UV1:%[0-9]+]]:_(s64) = G_UNMERGE_VALUES [[C]](s128)
; RV64I-NEXT: $x10 = COPY [[UV]](s64)
; RV64I-NEXT: $x11 = COPY [[UV1]](s64)
; RV64I-NEXT: PseudoRET implicit $x10, implicit $x11
- ret fp128 0xL00000000000000007FFF000000000000
+ ret fp128 f0x7FFF0000000000000000000000000000
}
define void @caller_fp128_ret() nounwind {
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll
index 6a1c3ca2b0b674..241a9802ec4050 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll
@@ -500,14 +500,14 @@ define <vscale x 8 x i64> @splat_zero_nxv8i64() {
define <vscale x 1 x half> @splat_zero_nxv1half() {
; RV32-LABEL: name: splat_zero_nxv1half
; RV32: bb.1 (%ir-block.0):
- ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV32-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 1 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV32-NEXT: $v8 = COPY [[SPLAT_VECTOR]](<vscale x 1 x s16>)
; RV32-NEXT: PseudoRET implicit $v8
;
; RV64-LABEL: name: splat_zero_nxv1half
; RV64: bb.1 (%ir-block.0):
- ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV64-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 1 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV64-NEXT: $v8 = COPY [[SPLAT_VECTOR]](<vscale x 1 x s16>)
; RV64-NEXT: PseudoRET implicit $v8
@@ -517,14 +517,14 @@ define <vscale x 1 x half> @splat_zero_nxv1half() {
define <vscale x 2 x half> @splat_zero_nxv2half() {
; RV32-LABEL: name: splat_zero_nxv2half
; RV32: bb.1 (%ir-block.0):
- ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV32-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 2 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV32-NEXT: $v8 = COPY [[SPLAT_VECTOR]](<vscale x 2 x s16>)
; RV32-NEXT: PseudoRET implicit $v8
;
; RV64-LABEL: name: splat_zero_nxv2half
; RV64: bb.1 (%ir-block.0):
- ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV64-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 2 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV64-NEXT: $v8 = COPY [[SPLAT_VECTOR]](<vscale x 2 x s16>)
; RV64-NEXT: PseudoRET implicit $v8
@@ -534,14 +534,14 @@ define <vscale x 2 x half> @splat_zero_nxv2half() {
define <vscale x 4 x half> @splat_zero_nxv4half() {
; RV32-LABEL: name: splat_zero_nxv4half
; RV32: bb.1 (%ir-block.0):
- ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV32-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 4 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV32-NEXT: $v8 = COPY [[SPLAT_VECTOR]](<vscale x 4 x s16>)
; RV32-NEXT: PseudoRET implicit $v8
;
; RV64-LABEL: name: splat_zero_nxv4half
; RV64: bb.1 (%ir-block.0):
- ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV64-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 4 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV64-NEXT: $v8 = COPY [[SPLAT_VECTOR]](<vscale x 4 x s16>)
; RV64-NEXT: PseudoRET implicit $v8
@@ -551,14 +551,14 @@ define <vscale x 4 x half> @splat_zero_nxv4half() {
define <vscale x 8 x half> @splat_zero_nxv8half() {
; RV32-LABEL: name: splat_zero_nxv8half
; RV32: bb.1 (%ir-block.0):
- ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV32-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 8 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV32-NEXT: $v8m2 = COPY [[SPLAT_VECTOR]](<vscale x 8 x s16>)
; RV32-NEXT: PseudoRET implicit $v8m2
;
; RV64-LABEL: name: splat_zero_nxv8half
; RV64: bb.1 (%ir-block.0):
- ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV64-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 8 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV64-NEXT: $v8m2 = COPY [[SPLAT_VECTOR]](<vscale x 8 x s16>)
; RV64-NEXT: PseudoRET implicit $v8m2
@@ -568,14 +568,14 @@ define <vscale x 8 x half> @splat_zero_nxv8half() {
define <vscale x 16 x half> @splat_zero_nxv16half() {
; RV32-LABEL: name: splat_zero_nxv16half
; RV32: bb.1 (%ir-block.0):
- ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV32-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 16 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV32-NEXT: $v8m4 = COPY [[SPLAT_VECTOR]](<vscale x 16 x s16>)
; RV32-NEXT: PseudoRET implicit $v8m4
;
; RV64-LABEL: name: splat_zero_nxv16half
; RV64: bb.1 (%ir-block.0):
- ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV64-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 16 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV64-NEXT: $v8m4 = COPY [[SPLAT_VECTOR]](<vscale x 16 x s16>)
; RV64-NEXT: PseudoRET implicit $v8m4
@@ -585,14 +585,14 @@ define <vscale x 16 x half> @splat_zero_nxv16half() {
define <vscale x 32 x half> @splat_zero_nxv32half() {
; RV32-LABEL: name: splat_zero_nxv32half
; RV32: bb.1 (%ir-block.0):
- ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV32-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV32-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 32 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV32-NEXT: $v8m8 = COPY [[SPLAT_VECTOR]](<vscale x 32 x s16>)
; RV32-NEXT: PseudoRET implicit $v8m8
;
; RV64-LABEL: name: splat_zero_nxv32half
; RV64: bb.1 (%ir-block.0):
- ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half 0xH0000
+ ; RV64-NEXT: [[C:%[0-9]+]]:_(s16) = G_FCONSTANT half f0x0000
; RV64-NEXT: [[SPLAT_VECTOR:%[0-9]+]]:_(<vscale x 32 x s16>) = G_SPLAT_VECTOR [[C]](s16)
; RV64-NEXT: $v8m8 = COPY [[SPLAT_VECTOR]](<vscale x 32 x s16>)
; RV64-NEXT: PseudoRET implicit $v8m8
diff --git a/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll b/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll
index 9387b7ef4c32ec..149587b8816196 100644
--- a/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll
+++ b/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll
@@ -236,7 +236,7 @@ define void @caller_aligned_stack() nounwind {
; RV32I-WITHFP-NEXT: addi sp, sp, 64
; RV32I-WITHFP-NEXT: ret
%1 = call i32 @callee_aligned_stack(i32 1, i32 11,
- fp128 0xLEB851EB851EB851F400091EB851EB851, i32 12, i32 13,
+ fp128 f0x400091EB851EB851EB851EB851EB851F, i32 12, i32 13,
i64 20000000000, i32 14, i32 15, double 2.720000e+00, i32 16,
[2 x i32] [i32 17, i32 18])
ret void
diff --git a/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll b/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll
index 18916dd69eb43a..17a484a5896c51 100644
--- a/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll
+++ b/llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll
@@ -285,7 +285,7 @@ define i32 @caller_large_scalars() nounwind {
; RV32I-WITHFP-NEXT: lw s0, 40(sp) # 4-byte Folded Reload
; RV32I-WITHFP-NEXT: addi sp, sp, 48
; RV32I-WITHFP-NEXT: ret
- %1 = call i32 @callee_large_scalars(i128 1, fp128 0xL00000000000000007FFF000000000000)
+ %1 = call i32 @callee_large_scalars(i128 1, fp128 f0x7FFF0000000000000000000000000000)
ret i32 %1
}
@@ -415,7 +415,7 @@ define i32 @caller_large_scalars_exhausted_regs() nounwind {
; RV32I-WITHFP-NEXT: ret
%1 = call i32 @callee_large_scalars_exhausted_regs(
i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i128 8, i32 9,
- fp128 0xL00000000000000007FFF000000000000)
+ fp128 f0x7FFF0000000000000000000000000000)
ret i32 %1
}
@@ -745,7 +745,7 @@ define void @caller_aligned_stack() nounwind {
; RV32I-WITHFP-NEXT: addi sp, sp, 64
; RV32I-WITHFP-NEXT: ret
%1 = call i32 @callee_aligned_stack(i32 1, i32 11,
- fp128 0xLEB851EB851EB851F400091EB851EB851, i32 12, i32 13,
+ fp128 f0x400091EB851EB851EB851EB851EB851F, i32 12, i32 13,
i64 20000000000, i32 14, i32 15, i64 16, i32 17,
[2 x i32] [i32 18, i32 19])
ret void
@@ -902,7 +902,7 @@ define fp128 @callee_large_scalar_ret() nounwind {
; RV32I-WITHFP-NEXT: lw s0, 8(sp) # 4-byte Folded Reload
; RV32I-WITHFP-NEXT: addi sp, sp, 16
; RV32I-WITHFP-NEXT: ret
- ret fp128 0xL00000000000000007FFF000000000000
+ ret fp128 f0x7FFF0000000000000000000000000000
}
define void @caller_large_scalar_ret() nounwind {
diff --git a/llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll b/llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll
index e16bed5400300b..6317ff625fc408 100644
--- a/llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll
+++ b/llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll
@@ -911,7 +911,7 @@ define void @caller_aligned_stack() {
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: .cfi_def_cfa_offset 8
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: tail __riscv_restore_1
%1 = call i32 @callee_aligned_stack(i32 1, i32 11,
- fp128 0xLEB851EB851EB851F400091EB851EB851, i32 12, i32 13,
+ fp128 f0x400091EB851EB851EB851EB851EB851F, i32 12, i32 13,
i64 20000000000, i32 14, i32 15, double 2.720000e+00, i32 16,
[2 x i32] [i32 17, i32 18])
ret void
@@ -1619,7 +1619,7 @@ define i32 @caller_large_scalars() {
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: addi sp, sp, 40
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: .cfi_def_cfa_offset 8
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: tail __riscv_restore_1
- %1 = call i32 @callee_large_scalars(i128 1, fp128 0xL00000000000000007FFF000000000000)
+ %1 = call i32 @callee_large_scalars(i128 1, fp128 f0x7FFF0000000000000000000000000000)
ret i32 %1
}
@@ -1921,7 +1921,7 @@ define i32 @caller_large_scalars_exhausted_regs() {
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: tail __riscv_restore_1
%1 = call i32 @callee_large_scalars_exhausted_regs(
i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i128 8, i32 9,
- fp128 0xL00000000000000007FFF000000000000)
+ fp128 f0x7FFF0000000000000000000000000000)
ret i32 %1
}
@@ -2517,7 +2517,7 @@ define fp128 @callee_large_scalar_ret() {
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: sw a1, 12(a0)
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: .cfi_def_cfa sp, 8
; ILP32E-WITHFP-SAVE-RESTORE-NEXT: tail __riscv_restore_1
- ret fp128 0xL00000000000000007FFF000000000000
+ ret fp128 f0x7FFF0000000000000000000000000000
}
define void @caller_large_scalar_ret() {
diff --git a/llvm/test/CodeGen/RISCV/fp128.ll b/llvm/test/CodeGen/RISCV/fp128.ll
index a8e26f7686e50d..534b379dcae326 100644
--- a/llvm/test/CodeGen/RISCV/fp128.ll
+++ b/llvm/test/CodeGen/RISCV/fp128.ll
@@ -2,8 +2,8 @@
; RUN: llc -mtriple=riscv32 -verify-machineinstrs < %s \
; RUN: | FileCheck -check-prefix=RV32I %s
-@x = local_unnamed_addr global fp128 0xL00000000000000007FFF000000000000, align 16
-@y = local_unnamed_addr global fp128 0xL00000000000000007FFF000000000000, align 16
+@x = local_unnamed_addr global fp128 f0x7FFF0000000000000000000000000000, align 16
+@y = local_unnamed_addr global fp128 f0x7FFF0000000000000000000000000000, align 16
; Besides anything else, these tests help verify that libcall ABI lowering
; works correctly
diff --git a/llvm/test/CodeGen/RISCV/half-zfa-fli.ll b/llvm/test/CodeGen/RISCV/half-zfa-fli.ll
index 281a873235623b..2350c39b442d1c 100644
--- a/llvm/test/CodeGen/RISCV/half-zfa-fli.ll
+++ b/llvm/test/CodeGen/RISCV/half-zfa-fli.ll
@@ -89,7 +89,7 @@ define half @loadfpimm6() {
; ZFHMIN-NEXT: lui a0, %hi(.LCPI5_0)
; ZFHMIN-NEXT: flh fa0, %lo(.LCPI5_0)(a0)
; ZFHMIN-NEXT: ret
- ret half 0xH7C00
+ ret half f0x7C00
}
define half @loadfpimm7() {
@@ -103,7 +103,7 @@ define half @loadfpimm7() {
; ZFHMIN-NEXT: lui a0, %hi(.LCPI6_0)
; ZFHMIN-NEXT: flh fa0, %lo(.LCPI6_0)(a0)
; ZFHMIN-NEXT: ret
- ret half 0xH7E00
+ ret half f0x7E00
}
define half @loadfpimm8() {
@@ -117,7 +117,7 @@ define half @loadfpimm8() {
; ZFHMIN-NEXT: li a0, 1024
; ZFHMIN-NEXT: fmv.h.x fa0, a0
; ZFHMIN-NEXT: ret
- ret half 0xH0400
+ ret half f0x0400
}
define half @loadfpimm9() {
@@ -147,7 +147,7 @@ define half @loadfpimm10() {
; ZFHMIN-NEXT: li a0, 256
; ZFHMIN-NEXT: fmv.h.x fa0, a0
; ZFHMIN-NEXT: ret
- ret half 0xH0100
+ ret half f0x0100
}
; This is 1 * 2^-15
@@ -162,7 +162,7 @@ define half @loadfpimm11() {
; ZFHMIN-NEXT: li a0, 512
; ZFHMIN-NEXT: fmv.h.x fa0, a0
; ZFHMIN-NEXT: ret
- ret half 0xH0200
+ ret half f0x0200
}
; Negative test. This is an snan with payload of 1.
@@ -178,7 +178,7 @@ define half @loadfpimm12() {
; ZFHMIN-NEXT: lui a0, %hi(.LCPI11_0)
; ZFHMIN-NEXT: flh fa0, %lo(.LCPI11_0)(a0)
; ZFHMIN-NEXT: ret
- ret half 0xH7c01
+ ret half f0x7c01
}
define half @loadfpimm13() {
@@ -225,5 +225,5 @@ define half @loadfpimm15() {
; ZFHMIN-NEXT: lui a0, %hi(.LCPI14_0)
; ZFHMIN-NEXT: flh fa0, %lo(.LCPI14_0)(a0)
; ZFHMIN-NEXT: ret
- ret half 0xH8400
+ ret half f0x8400
}
diff --git a/llvm/test/CodeGen/RISCV/stack-store-check.ll b/llvm/test/CodeGen/RISCV/stack-store-check.ll
index cd1aebfea5ce4e..eccbaec11c566d 100644
--- a/llvm/test/CodeGen/RISCV/stack-store-check.ll
+++ b/llvm/test/CodeGen/RISCV/stack-store-check.ll
@@ -304,23 +304,23 @@ define void @main() local_unnamed_addr nounwind {
; CHECK-NEXT: addi sp, sp, 704
; CHECK-NEXT: ret
%1 = load fp128, ptr @U, align 16
- %2 = fsub fp128 0xL00000000000000000000000000000000, %1
+ %2 = fsub fp128 f0x00000000000000000000000000000000, %1
%3 = fsub fp128 %2, %1
- %4 = fadd fp128 %1, 0xL00000000000000000000000000000000
+ %4 = fadd fp128 %1, f0x00000000000000000000000000000000
%5 = load fp128, ptr @Y1, align 16
%6 = fmul fp128 %2, %5
%7 = fadd fp128 %1, %4
- %8 = fsub fp128 0xL00000000000000000000000000000000, %7
+ %8 = fsub fp128 f0x00000000000000000000000000000000, %7
store fp128 %8, ptr @X, align 16
%9 = fmul fp128 %3, %5
- %10 = fmul fp128 0xL00000000000000000000000000000000, %4
+ %10 = fmul fp128 f0x00000000000000000000000000000000, %4
store fp128 %10, ptr @S, align 16
%11 = fsub fp128 %6, %3
store fp128 %11, ptr @T, align 16
- %12 = fadd fp128 0xL00000000000000000000000000000000, %9
+ %12 = fadd fp128 f0x00000000000000000000000000000000, %9
store fp128 %12, ptr @Y, align 16
- %13 = fmul fp128 0xL00000000000000000000000000000000, %5
- %14 = fadd fp128 %13, 0xL0000000000000000BFFE000000000000
+ %13 = fmul fp128 f0x00000000000000000000000000000000, %5
+ %14 = fadd fp128 %13, f0xBFFE0000000000000000000000000000
store fp128 %14, ptr @Y1, align 16
ret void
}
diff --git a/llvm/test/CodeGen/RISCV/tail-calls.ll b/llvm/test/CodeGen/RISCV/tail-calls.ll
index 366b37ac5d4720..0f2fb9e642db78 100644
--- a/llvm/test/CodeGen/RISCV/tail-calls.ll
+++ b/llvm/test/CodeGen/RISCV/tail-calls.ll
@@ -290,7 +290,7 @@ define void @caller_indirect_args() nounwind {
; CHECK-LARGE-ZICFILP-NEXT: addi sp, sp, 32
; CHECK-LARGE-ZICFILP-NEXT: ret
entry:
- %call = tail call i32 @callee_indirect_args(fp128 0xL00000000000000003FFF000000000000)
+ %call = tail call i32 @callee_indirect_args(fp128 f0x3FFF0000000000000000000000000000)
ret void
}
diff --git a/llvm/test/CodeGen/RISCV/vararg.ll b/llvm/test/CodeGen/RISCV/vararg.ll
index 895d84b38be321..11eb0460cea825 100644
--- a/llvm/test/CodeGen/RISCV/vararg.ll
+++ b/llvm/test/CodeGen/RISCV/vararg.ll
@@ -2631,7 +2631,7 @@ define void @va5_aligned_stack_caller() nounwind {
; LP64E-WITHFP-NEXT: addi sp, sp, 64
; LP64E-WITHFP-NEXT: ret
%1 = call i32 (i32, ...) @va5_aligned_stack_callee(i32 1, i32 11,
- fp128 0xLEB851EB851EB851F400091EB851EB851, i32 12, i32 13, i64 20000000000,
+ fp128 f0x400091EB851EB851EB851EB851EB851F, i32 12, i32 13, i64 20000000000,
i32 14, double 2.720000e+00, i32 15, [2 x i32] [i32 16, i32 17])
ret void
}
diff --git a/llvm/test/CodeGen/SPARC/fp128-select.ll b/llvm/test/CodeGen/SPARC/fp128-select.ll
index 72038e59b9fc8a..34a60b076d52ed 100644
--- a/llvm/test/CodeGen/SPARC/fp128-select.ll
+++ b/llvm/test/CodeGen/SPARC/fp128-select.ll
@@ -40,7 +40,7 @@ entry:
%0 = bitcast fp128 %b to i128
%xor.i = xor i128 %0, 0
%cmp19.i = icmp eq i128 %xor.i, -170141183460469231731687303715884105728
- %spec.select277.i = select i1 %cmp19.i, fp128 0xL00000000000000007FFF800000000000, fp128 %a
+ %spec.select277.i = select i1 %cmp19.i, fp128 f0x7FFF8000000000000000000000000000, fp128 %a
ret fp128 %spec.select277.i
}
@@ -76,7 +76,7 @@ entry:
%0 = bitcast fp128 %b to i128
%xor.i = xor i128 %0, 0
%cmp19.i = icmp eq i128 %xor.i, -170141183460469231731687303715884105728
- %spec.select277.i = select i1 %cmp19.i, fp128 0xL00000000000000007FFF800000000000, fp128 %a
+ %spec.select277.i = select i1 %cmp19.i, fp128 f0x7FFF8000000000000000000000000000, fp128 %a
ret fp128 %spec.select277.i
}
diff --git a/llvm/test/CodeGen/SPARC/fp128.ll b/llvm/test/CodeGen/SPARC/fp128.ll
index 521e33399ac280..f7d4a2585a4478 100644
--- a/llvm/test/CodeGen/SPARC/fp128.ll
+++ b/llvm/test/CodeGen/SPARC/fp128.ll
@@ -90,7 +90,7 @@ entry:
define i32 @f128_compare2(ptr byval(fp128) %f0) {
entry:
%0 = load fp128, ptr %f0, align 8
- %1 = fcmp ogt fp128 %0, 0xL00000000000000000000000000000000
+ %1 = fcmp ogt fp128 %0, f0x00000000000000000000000000000000
br i1 %1, label %"5", label %"7"
"5": ; preds = %entry
@@ -237,7 +237,7 @@ entry:
define void @f128_neg(ptr noalias sret(fp128) %scalar.result, ptr byval(fp128) %a) {
entry:
%0 = load fp128, ptr %a, align 8
- %1 = fsub fp128 0xL00000000000000008000000000000000, %0
+ %1 = fsub fp128 f0x80000000000000000000000000000000, %0
store fp128 %1, ptr %scalar.result, align 8
ret void
}
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll
index 1391fddfcdb369..344d01e58b7d7d 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll
@@ -279,7 +279,7 @@ entry:
%dst.addr = alloca ptr addrspace(1), align 4
%v = alloca half, align 2
store ptr addrspace(1) %dst, ptr %dst.addr, align 4
- store half 0xH0000, ptr %v, align 2
+ store half f0x0000, ptr %v, align 2
%value = load half, ptr %v, align 2
; CHECK: OpGroupNonUniformRotateKHR %[[TyHalf]] %[[ScopeSubgroup]] %[[#]] %[[ConstInt2]]
%call = call spir_func half @_Z16sub_group_rotateDhi(half noundef %value, i32 noundef 2) #2
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll
index 96e74149f44dbb..ecd7e6a99172d8 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll
@@ -44,7 +44,7 @@ entry:
%res5 = tail call spir_func i1 @_Z25__spirv_GroupLogicalOrKHR(i32 2, i32 0, i1 false)
%res6 = tail call spir_func i1 @_Z26__spirv_GroupLogicalXorKHR(i32 2, i32 0, i1 false)
%res7 = tail call spir_func i32 @_Z20__spirv_GroupIMulKHR(i32 2, i32 0, i32 0)
- %res8 = tail call spir_func half @_Z20__spirv_GroupFMulKHR(i32 2, i32 0, half 0xH0000)
+ %res8 = tail call spir_func half @_Z20__spirv_GroupFMulKHR(i32 2, i32 0, half f0x0000)
ret void
}
@@ -57,7 +57,7 @@ entry:
%res5 = tail call spir_func i32 @_Z28work_group_reduce_logical_ori(i32 0)
%res6 = tail call spir_func i32 @_Z29work_group_reduce_logical_xori(i32 0)
%res7 = tail call spir_func i32 @_Z21work_group_reduce_muli(i32 0)
- %res8 = tail call spir_func half @_Z21work_group_reduce_mulDh(half 0xH0000)
+ %res8 = tail call spir_func half @_Z21work_group_reduce_mulDh(half f0x0000)
ret void
}
diff --git a/llvm/test/CodeGen/SPIRV/half_extension.ll b/llvm/test/CodeGen/SPIRV/half_extension.ll
index b30e5514c95bea..b15d730ac56969 100644
--- a/llvm/test/CodeGen/SPIRV/half_extension.ll
+++ b/llvm/test/CodeGen/SPIRV/half_extension.ll
@@ -16,7 +16,7 @@ define spir_func half @test() {
entry:
%x = alloca half, align 2
%y = alloca half, align 2
- store half 0xH2E66, half* %x, align 2
+ store half f0x2E66, half* %x, align 2
%0 = load half, half* %x, align 2
%conv = fpext half %0 to float
%add = fadd float %conv, 2.000000e+00
diff --git a/llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll b/llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll
index 056673fa9d5a52..6452c57f100301 100644
--- a/llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll
+++ b/llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll
@@ -33,7 +33,7 @@ define spir_func noundef half @test_rcp_half(half noundef %p0) #0 {
entry:
; CHECK: %[[#arg0:]] = OpFunctionParameter %[[#float_16]]
; CHECK: OpFDiv %[[#float_16]] %[[#const_f16_1]] %[[#arg0]]
- %hlsl.rcp = fdiv half 0xH3C00, %p0
+ %hlsl.rcp = fdiv half f0x3C00, %p0
ret half %hlsl.rcp
}
@@ -41,7 +41,7 @@ define spir_func noundef <2 x half> @test_rcp_half2(<2 x half> noundef %p0) #0 {
entry:
; CHECK: %[[#arg0:]] = OpFunctionParameter %[[#vec2_float_16]]
; CHECK: OpFDiv %[[#vec2_float_16]] %[[#vec2_const_ones_f16]] %[[#arg0]]
- %hlsl.rcp = fdiv <2 x half> <half 0xH3C00, half 0xH3C00>, %p0
+ %hlsl.rcp = fdiv <2 x half> <half f0x3C00, half f0x3C00>, %p0
ret <2 x half> %hlsl.rcp
}
@@ -49,7 +49,7 @@ define spir_func noundef <3 x half> @test_rcp_half3(<3 x half> noundef %p0) #0 {
entry:
; CHECK: %[[#arg0:]] = OpFunctionParameter %[[#vec3_float_16]]
; CHECK: OpFDiv %[[#vec3_float_16]] %[[#vec3_const_ones_f16]] %[[#arg0]]
- %hlsl.rcp = fdiv <3 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00>, %p0
+ %hlsl.rcp = fdiv <3 x half> <half f0x3C00, half f0x3C00, half f0x3C00>, %p0
ret <3 x half> %hlsl.rcp
}
@@ -57,7 +57,7 @@ define spir_func noundef <4 x half> @test_rcp_half4(<4 x half> noundef %p0) #0 {
entry:
; CHECK: %[[#arg0:]] = OpFunctionParameter %[[#vec4_float_16]]
; CHECK: OpFDiv %[[#vec4_float_16]] %[[#vec4_const_ones_f16]] %[[#arg0]]
- %hlsl.rcp = fdiv <4 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>, %p0
+ %hlsl.rcp = fdiv <4 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>, %p0
ret <4 x half> %hlsl.rcp
}
diff --git a/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll b/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll
index 6a4b4f593bf3b8..8ac55840373e86 100644
--- a/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll
+++ b/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll
@@ -290,10 +290,10 @@ define dso_local spir_kernel void @test_wrappers(ptr addrspace(4) %arg, i64 %arg
%r14 = call spir_func <4 x i32> @_Z22__spirv_SConvert_Rint2Dv2_a(<4 x i8> %arg_v2)
%r15 = call spir_func float @_Z30__spirv_ConvertUToF_Rfloat_rtz(i64 %arg_ptr)
%r16 = call spir_func float @__spirv_ConvertUToF_Rfloat_rtz(i64 %arg_ptr)
- %r17 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtzDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
- %r18 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rteDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
- %r19 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtpDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
- %r20 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtnDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
+ %r17 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtzDv2_DF16_(<2 x half> noundef <half f0x409A, half f0x439A>)
+ %r18 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rteDv2_DF16_(<2 x half> noundef <half f0x409A, half f0x439A>)
+ %r19 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtpDv2_DF16_(<2 x half> noundef <half f0x409A, half f0x439A>)
+ %r20 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtnDv2_DF16_(<2 x half> noundef <half f0x409A, half f0x439A>)
%r21 = call spir_func i8 @_Z30__spirv_ConvertFToU_Ruchar_satf(float noundef 42.0)
ret void
}
diff --git a/llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll b/llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll
index 8e29876d61d339..f86a5cc3871532 100644
--- a/llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll
+++ b/llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll
@@ -15,11 +15,11 @@ entry:
%locidx = addrspacecast ptr addrspace(3) %_arg_loc to ptr addrspace(4)
%ptr1 = tail call spir_func noundef ptr addrspace(3) @_Z40__spirv_GenericCastToPtrExplicit_ToLocalPvi(ptr addrspace(4) noundef %locidx, i32 noundef 4)
- %sincos_r = tail call spir_func noundef half @_Z18__spirv_ocl_sincosDF16_PU3AS3DF16_(half noundef 0xH3145, ptr addrspace(3) noundef %ptr1)
+ %sincos_r = tail call spir_func noundef half @_Z18__spirv_ocl_sincosDF16_PU3AS3DF16_(half noundef f0x3145, ptr addrspace(3) noundef %ptr1)
%p1 = addrspacecast ptr addrspace(1) %_acc to ptr addrspace(4)
%ptr2 = tail call spir_func noundef ptr addrspace(1) @_Z41__spirv_GenericCastToPtrExplicit_ToGlobalPvi(ptr addrspace(4) noundef %p1, i32 noundef 5)
- %remquo_r = tail call spir_func noundef half @_Z18__spirv_ocl_remquoDF16_DF16_PU3AS1i(half noundef 0xH3A37, half noundef 0xH32F4, ptr addrspace(1) noundef %ptr2)
+ %remquo_r = tail call spir_func noundef half @_Z18__spirv_ocl_remquoDF16_DF16_PU3AS1i(half noundef f0x3A37, half noundef f0x32F4, ptr addrspace(1) noundef %ptr2)
ret void
}
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll b/llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll
index 8ce76534c50db5..c236389421aed1 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll
@@ -41,7 +41,7 @@ entry:
%4 = call i64 @_Z20__spirv_SpecConstantix(i32 4, i64 3)
store i64 %4, i64 addrspace(1)* %l, align 8
- %5 = call half @_Z20__spirv_SpecConstantih(i32 5, half 0xH3800)
+ %5 = call half @_Z20__spirv_SpecConstantih(i32 5, half f0x3800)
store half %5, half addrspace(1)* %h, align 2
%6 = call float @_Z20__spirv_SpecConstantif(i32 6, float 1.250000e+00)
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll
index a815f5d44969c9..489ed1485e8880 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll
@@ -748,8 +748,8 @@ declare dso_local spir_func float @_Z25sub_group_broadcast_firstf(float) local_u
; CHECK-SPIRV: OpFunctionEnd
define dso_local spir_kernel void @testNonUniformBroadcastHalfs() local_unnamed_addr {
- %1 = tail call spir_func half @_Z31sub_group_non_uniform_broadcastDhj(half 0xH0000, i32 0)
- %2 = insertelement <16 x half> <half undef, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000>, half %1, i64 0
+ %1 = tail call spir_func half @_Z31sub_group_non_uniform_broadcastDhj(half f0x0000, i32 0)
+ %2 = insertelement <16 x half> <half undef, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000>, half %1, i64 0
%3 = shufflevector <16 x half> %2, <16 x half> undef, <2 x i32> <i32 0, i32 1>
%4 = tail call spir_func <2 x half> @_Z31sub_group_non_uniform_broadcastDv2_Dhj(<2 x half> %3, i32 0)
%5 = shufflevector <2 x half> %4, <2 x half> undef, <16 x i32> <i32 0, i32 1, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll
index 22bf747490da87..ba291147c0917d 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll
@@ -475,15 +475,15 @@ declare dso_local spir_func float @_Z30sub_group_clustered_reduce_maxfj(float, i
; CHECK-SPIRV: OpFunctionEnd
define dso_local spir_kernel void @testClusteredArithmeticHalf(half addrspace(1)* nocapture) local_unnamed_addr {
- %2 = tail call spir_func half @_Z30sub_group_clustered_reduce_addDhj(half 0xH0000, i32 2)
+ %2 = tail call spir_func half @_Z30sub_group_clustered_reduce_addDhj(half f0x0000, i32 2)
store half %2, half addrspace(1)* %0, align 2
- %3 = tail call spir_func half @_Z30sub_group_clustered_reduce_mulDhj(half 0xH0000, i32 2)
+ %3 = tail call spir_func half @_Z30sub_group_clustered_reduce_mulDhj(half f0x0000, i32 2)
%4 = getelementptr inbounds half, half addrspace(1)* %0, i64 1
store half %3, half addrspace(1)* %4, align 2
- %5 = tail call spir_func half @_Z30sub_group_clustered_reduce_minDhj(half 0xH0000, i32 2)
+ %5 = tail call spir_func half @_Z30sub_group_clustered_reduce_minDhj(half f0x0000, i32 2)
%6 = getelementptr inbounds half, half addrspace(1)* %0, i64 2
store half %5, half addrspace(1)* %6, align 2
- %7 = tail call spir_func half @_Z30sub_group_clustered_reduce_maxDhj(half 0xH0000, i32 2)
+ %7 = tail call spir_func half @_Z30sub_group_clustered_reduce_maxDhj(half f0x0000, i32 2)
%8 = getelementptr inbounds half, half addrspace(1)* %0, i64 3
store half %7, half addrspace(1)* %8, align 2
ret void
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll
index 1ba91a2efb6a01..47f27cdd61fada 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll
@@ -707,8 +707,8 @@ declare dso_local spir_func <16 x float> @_Z19sub_group_broadcastDv16_fj(<16 x f
; CHECK-SPIRV: OpFunctionEnd
define dso_local spir_kernel void @testBroadcastHalf() local_unnamed_addr {
- %1 = tail call spir_func half @_Z19sub_group_broadcastDhj(half 0xH0000, i32 0)
- %2 = insertelement <16 x half> <half undef, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000>, half %1, i64 0
+ %1 = tail call spir_func half @_Z19sub_group_broadcastDhj(half f0x0000, i32 0)
+ %2 = insertelement <16 x half> <half undef, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000>, half %1, i64 0
%3 = shufflevector <16 x half> %2, <16 x half> undef, <2 x i32> <i32 0, i32 1>
%4 = tail call spir_func <2 x half> @_Z19sub_group_broadcastDv2_Dhj(<2 x half> %3, i32 0)
%5 = shufflevector <2 x half> %4, <2 x half> undef, <16 x i32> <i32 0, i32 1, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll
index adf73fe153dea2..51ad604c83e767 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll
@@ -1084,39 +1084,39 @@ declare dso_local spir_func float @_Z40sub_group_non_uniform_scan_exclusive_maxf
; CHECK-SPIRV: OpFunctionEnd
define dso_local spir_kernel void @testNonUniformArithmeticHalf(half addrspace(1)* nocapture) local_unnamed_addr {
- %2 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_addDh(half 0xH0000)
+ %2 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_addDh(half f0x0000)
store half %2, half addrspace(1)* %0, align 2
- %3 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_mulDh(half 0xH0000)
+ %3 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_mulDh(half f0x0000)
%4 = getelementptr inbounds half, half addrspace(1)* %0, i64 1
store half %3, half addrspace(1)* %4, align 2
- %5 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_minDh(half 0xH0000)
+ %5 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_minDh(half f0x0000)
%6 = getelementptr inbounds half, half addrspace(1)* %0, i64 2
store half %5, half addrspace(1)* %6, align 2
- %7 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_maxDh(half 0xH0000)
+ %7 = tail call spir_func half @_Z32sub_group_non_uniform_reduce_maxDh(half f0x0000)
%8 = getelementptr inbounds half, half addrspace(1)* %0, i64 3
store half %7, half addrspace(1)* %8, align 2
- %9 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_addDh(half 0xH0000)
+ %9 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_addDh(half f0x0000)
%10 = getelementptr inbounds half, half addrspace(1)* %0, i64 4
store half %9, half addrspace(1)* %10, align 2
- %11 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_mulDh(half 0xH0000)
+ %11 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_mulDh(half f0x0000)
%12 = getelementptr inbounds half, half addrspace(1)* %0, i64 5
store half %11, half addrspace(1)* %12, align 2
- %13 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_minDh(half 0xH0000)
+ %13 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_minDh(half f0x0000)
%14 = getelementptr inbounds half, half addrspace(1)* %0, i64 6
store half %13, half addrspace(1)* %14, align 2
- %15 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_maxDh(half 0xH0000)
+ %15 = tail call spir_func half @_Z40sub_group_non_uniform_scan_inclusive_maxDh(half f0x0000)
%16 = getelementptr inbounds half, half addrspace(1)* %0, i64 7
store half %15, half addrspace(1)* %16, align 2
- %17 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_addDh(half 0xH0000)
+ %17 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_addDh(half f0x0000)
%18 = getelementptr inbounds half, half addrspace(1)* %0, i64 8
store half %17, half addrspace(1)* %18, align 2
- %19 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_mulDh(half 0xH0000)
+ %19 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_mulDh(half f0x0000)
%20 = getelementptr inbounds half, half addrspace(1)* %0, i64 9
store half %19, half addrspace(1)* %20, align 2
- %21 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_minDh(half 0xH0000)
+ %21 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_minDh(half f0x0000)
%22 = getelementptr inbounds half, half addrspace(1)* %0, i64 10
store half %21, half addrspace(1)* %22, align 2
- %23 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_maxDh(half 0xH0000)
+ %23 = tail call spir_func half @_Z40sub_group_non_uniform_scan_exclusive_maxDh(half f0x0000)
%24 = getelementptr inbounds half, half addrspace(1)* %0, i64 11
store half %23, half addrspace(1)* %24, align 2
ret void
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll
index 183f1d2eeef599..17ff2bdc0bd15c 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll
@@ -166,7 +166,7 @@ define dso_local spir_kernel void @testSubGroupNonUniformAllEqual(i32 addrspace(
store i32 %9, i32 addrspace(1)* %0, align 4
%10 = tail call spir_func i32 @_Z31sub_group_non_uniform_all_equalf(float 0.000000e+00)
store i32 %10, i32 addrspace(1)* %0, align 4
- %11 = tail call spir_func i32 @_Z31sub_group_non_uniform_all_equalDh(half 0xH0000)
+ %11 = tail call spir_func i32 @_Z31sub_group_non_uniform_all_equalDh(half f0x0000)
store i32 %11, i32 addrspace(1)* %0, align 4
%12 = tail call spir_func i32 @_Z31sub_group_non_uniform_all_equald(double 0.000000e+00)
store i32 %12, i32 addrspace(1)* %0, align 4
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll
index b4099849934a17..34f03a963eaef1 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll
@@ -273,9 +273,9 @@ declare dso_local spir_func float @_Z21sub_group_shuffle_xorfj(float, i32) local
; CHECK-SPIRV: OpFunctionEnd
define dso_local spir_kernel void @testShuffleHalf(half addrspace(1)* nocapture) local_unnamed_addr {
- %2 = tail call spir_func half @_Z17sub_group_shuffleDhj(half 0xH0000, i32 0)
+ %2 = tail call spir_func half @_Z17sub_group_shuffleDhj(half f0x0000, i32 0)
store half %2, half addrspace(1)* %0, align 2
- %3 = tail call spir_func half @_Z21sub_group_shuffle_xorDhj(half 0xH0000, i32 0)
+ %3 = tail call spir_func half @_Z21sub_group_shuffle_xorDhj(half f0x0000, i32 0)
%4 = getelementptr inbounds half, half addrspace(1)* %0, i64 1
store half %3, half addrspace(1)* %4, align 2
ret void
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll
index f71d5e42330c94..404ff3b8f3c163 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll
@@ -277,9 +277,9 @@ declare dso_local spir_func float @_Z22sub_group_shuffle_downfj(float, i32) loca
; CHECK-SPIRV: OpFunctionEnd
define dso_local spir_kernel void @testShuffleRelativeHalf(half addrspace(1)* nocapture) local_unnamed_addr {
- %2 = tail call spir_func half @_Z20sub_group_shuffle_upDhj(half 0xH0000, i32 0)
+ %2 = tail call spir_func half @_Z20sub_group_shuffle_upDhj(half f0x0000, i32 0)
store half %2, half addrspace(1)* %0, align 2
- %3 = tail call spir_func half @_Z22sub_group_shuffle_downDhj(half 0xH0000, i32 0)
+ %3 = tail call spir_func half @_Z22sub_group_shuffle_downDhj(half f0x0000, i32 0)
%4 = getelementptr inbounds half, half addrspace(1)* %0, i64 1
store half %3, half addrspace(1)* %4, align 2
ret void
diff --git a/llvm/test/CodeGen/SystemZ/args-01.ll b/llvm/test/CodeGen/SystemZ/args-01.ll
index 113110faf34137..8111c81889af2f 100644
--- a/llvm/test/CodeGen/SystemZ/args-01.ll
+++ b/llvm/test/CodeGen/SystemZ/args-01.ll
@@ -66,8 +66,8 @@ define void @foo() {
; CHECK-STACK: brasl %r14, bar@PLT
call void @bar (i8 1, i16 2, i32 3, i64 4, float 0.0, double 0.0,
- fp128 0xL00000000000000000000000000000000, i64 5,
+ fp128 f0x00000000000000000000000000000000, i64 5,
float -0.0, double -0.0, i8 6, i16 7, i32 8, i64 9, float 0.0,
- double 0.0, fp128 0xL00000000000000000000000000000000)
+ double 0.0, fp128 f0x00000000000000000000000000000000)
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/args-02.ll b/llvm/test/CodeGen/SystemZ/args-02.ll
index cd07b2c91700ba..77afbf7cfde1cf 100644
--- a/llvm/test/CodeGen/SystemZ/args-02.ll
+++ b/llvm/test/CodeGen/SystemZ/args-02.ll
@@ -67,9 +67,9 @@ define void @foo() {
; CHECK-STACK: brasl %r14, bar@PLT
call void @bar (i8 signext -1, i16 signext -2, i32 signext -3, i64 -4, float 0.0, double 0.0,
- fp128 0xL00000000000000000000000000000000, i64 -5,
+ fp128 f0x00000000000000000000000000000000, i64 -5,
float -0.0, double -0.0, i8 signext -6, i16 signext -7, i32 signext -8, i64 -9,
float 0.0, double 0.0,
- fp128 0xL00000000000000000000000000000000)
+ fp128 f0x00000000000000000000000000000000)
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/args-03.ll b/llvm/test/CodeGen/SystemZ/args-03.ll
index 97d5bcde34b263..2ed5112b946bdd 100644
--- a/llvm/test/CodeGen/SystemZ/args-03.ll
+++ b/llvm/test/CodeGen/SystemZ/args-03.ll
@@ -69,9 +69,9 @@ define void @foo() {
; CHECK-STACK: brasl %r14, bar@PLT
call void @bar (i8 zeroext -1, i16 zeroext -2, i32 zeroext -3, i64 -4, float 0.0, double 0.0,
- fp128 0xL00000000000000000000000000000000, i64 -5,
+ fp128 f0x00000000000000000000000000000000, i64 -5,
float -0.0, double -0.0, i8 zeroext -6, i16 zeroext -7, i32 zeroext -8, i64 -9,
float 0.0, double 0.0,
- fp128 0xL00000000000000000000000000000000)
+ fp128 f0x00000000000000000000000000000000)
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/asm-10.ll b/llvm/test/CodeGen/SystemZ/asm-10.ll
index b71db8350781de..3d3beb4c1d1530 100644
--- a/llvm/test/CodeGen/SystemZ/asm-10.ll
+++ b/llvm/test/CodeGen/SystemZ/asm-10.ll
@@ -25,6 +25,6 @@ define double @f3() {
; CHECK: lzxr %f1
; CHECK: blah %f0 %f1
; CHECK: br %r14
- %val = call double asm "blah $0 $1", "=&f,f" (fp128 0xL00000000000000000000000000000000)
+ %val = call double asm "blah $0 $1", "=&f,f" (fp128 f0x00000000000000000000000000000000)
ret double %val
}
diff --git a/llvm/test/CodeGen/SystemZ/asm-17.ll b/llvm/test/CodeGen/SystemZ/asm-17.ll
index c9c4d73c66ebb5..124e2b9d0f7868 100644
--- a/llvm/test/CodeGen/SystemZ/asm-17.ll
+++ b/llvm/test/CodeGen/SystemZ/asm-17.ll
@@ -55,7 +55,7 @@ define void @f5(ptr %dest) {
; CHECK-DAG: std %f4, 0(%r2)
; CHECK-DAG: std %f6, 8(%r2)
; CHECK: br %r14
- %ret = call fp128 asm "blah $0", "={f4},0" (fp128 0xL00000000000000000000000000000000)
+ %ret = call fp128 asm "blah $0", "={f4},0" (fp128 f0x00000000000000000000000000000000)
store fp128 %ret, ptr %dest
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/asm-19.ll b/llvm/test/CodeGen/SystemZ/asm-19.ll
index e16fdfa13fce6a..788c91f4a5586c 100644
--- a/llvm/test/CodeGen/SystemZ/asm-19.ll
+++ b/llvm/test/CodeGen/SystemZ/asm-19.ll
@@ -27,7 +27,7 @@ define fp128 @f3() {
; CHECK: blah %v1 %v0
; CHECK: vst %v1, 0(%r2)
; CHECK: br %r14
- %val = call fp128 asm "blah $0 $1", "=&v,v" (fp128 0xL00000000000000000000000000000000)
+ %val = call fp128 asm "blah $0 $1", "=&v,v" (fp128 f0x00000000000000000000000000000000)
ret fp128 %val
}
@@ -112,7 +112,7 @@ define fp128 @f12() {
; CHECK: blah %v4
; CHECK: vst %v4, 0(%r2)
; CHECK: br %r14
- %ret = call fp128 asm "blah $0", "={v4},0" (fp128 0xL00000000000000000000000000000000)
+ %ret = call fp128 asm "blah $0", "={v4},0" (fp128 f0x00000000000000000000000000000000)
ret fp128 %ret
}
diff --git a/llvm/test/CodeGen/SystemZ/call-03.ll b/llvm/test/CodeGen/SystemZ/call-03.ll
index 8cb0a5605809ac..f30f120893e1ac 100644
--- a/llvm/test/CodeGen/SystemZ/call-03.ll
+++ b/llvm/test/CodeGen/SystemZ/call-03.ll
@@ -45,7 +45,7 @@ define void @f3() {
; CHECK-LABEL: f3:
; CHECK: brasl %r14, uses_indirect@PLT
; CHECK: br %r14
- tail call void @uses_indirect(fp128 0xL00000000000000000000000000000000)
+ tail call void @uses_indirect(fp128 f0x00000000000000000000000000000000)
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/call-zos-01.ll b/llvm/test/CodeGen/SystemZ/call-zos-01.ll
index 7ad1e4c4679ebc..72899f65f9ecbf 100644
--- a/llvm/test/CodeGen/SystemZ/call-zos-01.ll
+++ b/llvm/test/CodeGen/SystemZ/call-zos-01.ll
@@ -118,7 +118,7 @@ entry:
; CHECK-NEXT: ld 2,8([[GENREG]])
define fp128 @call_longdouble() {
entry:
- %ret = call fp128 (fp128) @pass_longdouble(fp128 0xLE0FC1518450562CD4000921FB5444261)
+ %ret = call fp128 (fp128) @pass_longdouble(fp128 f0x4000921FB5444261E0FC1518450562CD)
ret fp128 %ret
}
@@ -131,7 +131,7 @@ entry:
; CHECK: lxr 4,5
define i64 @call_floats0(fp128 %arg0, double %arg1) {
entry:
- %ret = call i64 (fp128, fp128, double) @pass_floats0(fp128 0xLE0FC1518450562CD4000921FB5444261, fp128 %arg0, double %arg1)
+ %ret = call i64 (fp128, fp128, double) @pass_floats0(fp128 f0x4000921FB5444261E0FC1518450562CD, fp128 %arg0, double %arg1)
ret i64 %ret
}
@@ -169,7 +169,7 @@ entry:
; CHECK: axbr 0,1
define fp128 @pass_longdouble(fp128 %arg) {
entry:
- %X = fadd fp128 %arg, 0xL10000000000000004000921FB53C8D4F
+ %X = fadd fp128 %arg, f0x4000921FB53C8D4F1000000000000000
ret fp128 %X
}
@@ -182,7 +182,7 @@ define i64 @pass_floats0(fp128 %arg0, fp128 %arg1, double %arg2) {
%X = fadd fp128 %arg0, %arg1
%arg2_ext = fpext double %arg2 to fp128
%Y = fadd fp128 %X, %arg2_ext
- %ret_bool = fcmp ueq fp128 %Y, 0xLE0FC1518450562CD4000921FB5444261
+ %ret_bool = fcmp ueq fp128 %Y, f0x4000921FB5444261E0FC1518450562CD
%ret = sext i1 %ret_bool to i64
ret i64 %ret
}
diff --git a/llvm/test/CodeGen/SystemZ/call-zos-vararg.ll b/llvm/test/CodeGen/SystemZ/call-zos-vararg.ll
index 72f4d79610e0e4..6bbc36f9d70989 100644
--- a/llvm/test/CodeGen/SystemZ/call-zos-vararg.ll
+++ b/llvm/test/CodeGen/SystemZ/call-zos-vararg.ll
@@ -127,7 +127,7 @@ define i64 @call_vararg_both0(i64 %arg0, double %arg1) {
; CHECK-NEXT: b 2(7)
define i64 @call_vararg_long_double0() {
entry:
- %retval = call i64 (i64, i64, ...) @pass_vararg0(i64 1, i64 2, fp128 0xLE0FC1518450562CD4000921FB5444261)
+ %retval = call i64 (i64, i64, ...) @pass_vararg0(i64 1, i64 2, fp128 f0x4000921FB5444261E0FC1518450562CD)
ret i64 %retval
}
@@ -211,7 +211,7 @@ define void @call_vec_vararg_test0(<2 x double> %v) {
; ARCH12: vst 25,2208(4),3
; ARCH12: vst 24,2192(4),3
define void @call_vec_vararg_test1(<4 x i32> %v, <2 x i64> %w) {
- %retval = call i64(fp128, ...) @pass_vararg1(fp128 0xLE0FC1518450562CD4000921FB5444261, <4 x i32> %v, <2 x i64> %w)
+ %retval = call i64(fp128, ...) @pass_vararg1(fp128 f0x4000921FB5444261E0FC1518450562CD, <4 x i32> %v, <2 x i64> %w)
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-cmp-03.ll b/llvm/test/CodeGen/SystemZ/fp-cmp-03.ll
index b645a15060960d..9415d70e9e9ff9 100644
--- a/llvm/test/CodeGen/SystemZ/fp-cmp-03.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-cmp-03.ll
@@ -30,7 +30,7 @@ define i64 @f2(i64 %a, i64 %b, ptr %ptr) {
; CHECK: lgr %r2, %r3
; CHECK: br %r14
%f = load fp128, ptr %ptr
- %cond = fcmp oeq fp128 %f, 0xL00000000000000000000000000000000
+ %cond = fcmp oeq fp128 %f, f0x00000000000000000000000000000000
%res = select i1 %cond, i64 %a, i64 %b
ret i64 %res
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-cmp-04.ll b/llvm/test/CodeGen/SystemZ/fp-cmp-04.ll
index c1773abe92305d..9410f58ec2107a 100644
--- a/llvm/test/CodeGen/SystemZ/fp-cmp-04.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-cmp-04.ll
@@ -293,7 +293,7 @@ entry:
store fp128 %div, ptr %ptr1
%mul = fmul fp128 %val1, %val2
store fp128 %mul, ptr %ptr2
- %cmp = fcmp olt fp128 %val1, 0xL00000000000000000000000000000000
+ %cmp = fcmp olt fp128 %val1, f0x00000000000000000000000000000000
br i1 %cmp, label %exit, label %store
store:
diff --git a/llvm/test/CodeGen/SystemZ/fp-cmp-06.ll b/llvm/test/CodeGen/SystemZ/fp-cmp-06.ll
index 784ad72076b064..7acab83e147032 100644
--- a/llvm/test/CodeGen/SystemZ/fp-cmp-06.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-cmp-06.ll
@@ -27,7 +27,7 @@ define i64 @f2(i64 %a, i64 %b, ptr %ptr) {
; CHECK-NEXT: locgrne %r2, %r3
; CHECK: br %r14
%f = load fp128, ptr %ptr
- %cond = fcmp oeq fp128 %f, 0xL00000000000000000000000000000000
+ %cond = fcmp oeq fp128 %f, f0x00000000000000000000000000000000
%res = select i1 %cond, i64 %a, i64 %b
ret i64 %res
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-cmp-zero.ll b/llvm/test/CodeGen/SystemZ/fp-cmp-zero.ll
index 01318f3cf119a8..d7dc05966b1fc4 100644
--- a/llvm/test/CodeGen/SystemZ/fp-cmp-zero.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-cmp-zero.ll
@@ -79,7 +79,7 @@ define i64 @f4m(i64 %a, i64 %b, double %V) {
define i64 @f5(i64 %a, i64 %b, fp128 %V, ptr %dst) {
; CHECK-LABEL: f5:
; CHECK: ltxbr %f1, %f0
- %cond = fcmp oeq fp128 %V, 0xL00000000000000008000000000000000
+ %cond = fcmp oeq fp128 %V, f0x80000000000000000000000000000000
%res = select i1 %cond, i64 %a, i64 %b
store volatile fp128 %V, ptr %dst
ret i64 %res
@@ -88,7 +88,7 @@ define i64 @f5(i64 %a, i64 %b, fp128 %V, ptr %dst) {
define i64 @f6(i64 %a, i64 %b, fp128 %V) {
; CHECK-LABEL: f6:
; CHECK: ltxbr %f0, %f0
- %cond = fcmp oeq fp128 %V, 0xL00000000000000008000000000000000
+ %cond = fcmp oeq fp128 %V, f0x80000000000000000000000000000000
%res = select i1 %cond, i64 %a, i64 %b
ret i64 %res
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-01.ll b/llvm/test/CodeGen/SystemZ/fp-const-01.ll
index fe0e63df4ae6a0..8d35bfa4944eb6 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-01.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-01.ll
@@ -25,6 +25,6 @@ define void @f3(ptr %x) {
; CHECK: std %f0, 0(%r2)
; CHECK: std %f2, 8(%r2)
; CHECK: br %r14
- store fp128 0xL00000000000000000000000000000000, ptr %x
+ store fp128 f0x00000000000000000000000000000000, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-02.ll b/llvm/test/CodeGen/SystemZ/fp-const-02.ll
index fd83413d738389..cf750611a56394 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-02.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-02.ll
@@ -26,6 +26,6 @@ define void @f3(ptr %x) {
; CHECK: lzxr [[REGISTER:%f[0-5]+]]
; CHECK: lcxbr %f0, [[REGISTER]]
; CHECK: br %r14
- store fp128 0xL00000000000000008000000000000000, ptr %x
+ store fp128 f0x80000000000000000000000000000000, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-05.ll b/llvm/test/CodeGen/SystemZ/fp-const-05.ll
index 63d2033742615d..6fc28be33c5af8 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-05.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-05.ll
@@ -13,6 +13,6 @@ define void @f1(ptr %x) {
; CHECK: br %r14
;
; CONST: .long 0x3f800001
- store fp128 0xL00000000000000003fff000002000000, ptr %x
+ store fp128 f0x3fff0000020000000000000000000000, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-07.ll b/llvm/test/CodeGen/SystemZ/fp-const-07.ll
index f99fa9b71fdf20..bf667426dd9b39 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-07.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-07.ll
@@ -13,6 +13,6 @@ define void @f1(ptr %x) {
; CHECK: br %r14
;
; CONST: .quad 0x3ff0000010000000
- store fp128 0xL00000000000000003fff000001000000, ptr %x
+ store fp128 f0x3fff0000010000000000000000000000, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-08.ll b/llvm/test/CodeGen/SystemZ/fp-const-08.ll
index e146cf9e27664c..fa95a8f9d0a79e 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-08.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-08.ll
@@ -16,6 +16,6 @@ define void @f1(ptr %x) {
;
; CONST: .quad 0x3fff000000000000
; CONST: .quad 0x0800000000000000
- store fp128 0xL08000000000000003fff000000000000, ptr %x
+ store fp128 f0x3fff0000000000000800000000000000, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-09.ll b/llvm/test/CodeGen/SystemZ/fp-const-09.ll
index a3b4cd6d8ee77c..ed3e403f9f3039 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-09.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-09.ll
@@ -15,6 +15,6 @@ define void @f1(ptr %x) {
;
; CONST: .quad 0x3fff000000000000
; CONST: .quad 0x0000000000000001
- store fp128 0xL00000000000000013fff000000000000, ptr %x
+ store fp128 f0x3fff0000000000000000000000000001, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-const-11.ll b/llvm/test/CodeGen/SystemZ/fp-const-11.ll
index f64129d71fedee..7c457ced852aec 100644
--- a/llvm/test/CodeGen/SystemZ/fp-const-11.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-const-11.ll
@@ -9,7 +9,7 @@ define void @f1(ptr %x) {
; CHECK: vzero [[REG:%v[0-9]+]]
; CHECK: vst [[REG]], 0(%r2)
; CHECK: br %r14
- store fp128 0xL00000000000000000000000000000000, ptr %x
+ store fp128 f0x00000000000000000000000000000000, ptr %x
ret void
}
@@ -20,7 +20,7 @@ define void @f2(ptr %x) {
; CHECK: wflnxb [[REG]], [[REG]]
; CHECK: vst [[REG]], 0(%r2)
; CHECK: br %r14
- store fp128 0xL00000000000000008000000000000000, ptr %x
+ store fp128 f0x80000000000000000000000000000000, ptr %x
ret void
}
@@ -35,7 +35,7 @@ define void @f3(ptr %x) {
; CHECK: br %r14
; CONST: .quad 0x3fff000002000000
; CONST: .quad 0x0
- store fp128 0xL00000000000000003fff000002000000, ptr %x
+ store fp128 f0x3fff0000020000000000000000000000, ptr %x
ret void
}
@@ -45,7 +45,7 @@ define void @f4(ptr %x) {
; CHECK: vgbm %v0, 21845
; CHECK-NEXT: vst %v0, 0(%r2)
; CHECK-NEXT: br %r14
- store fp128 0xL00ff00ff00ff00ff00ff00ff00ff00ff, ptr %x
+ store fp128 f0x00ff00ff00ff00ff00ff00ff00ff00ff, ptr %x
ret void
}
@@ -55,7 +55,7 @@ define void @f5(ptr %x) {
; CHECK: vrepib %v0, -8
; CHECK-NEXT: vst %v0, 0(%r2)
; CHECK-NEXT: br %r14
- store fp128 0xLf8f8f8f8f8f8f8f8f8f8f8f8f8f8f8f8, ptr %x
+ store fp128 f0xf8f8f8f8f8f8f8f8f8f8f8f8f8f8f8f8, ptr %x
ret void
}
@@ -65,6 +65,6 @@ define void @f6(ptr %x) {
; CHECK: vgmg %v0, 12, 31
; CHECK-NEXT: vst %v0, 0(%r2)
; CHECK-NEXT: br %r14
- store fp128 0xL000fffff00000000000fffff00000000, ptr %x
+ store fp128 f0x000fffff00000000000fffff00000000, ptr %x
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-mul-12.ll b/llvm/test/CodeGen/SystemZ/fp-mul-12.ll
index dcc5ae622dcb66..c6fcba3815a4c8 100644
--- a/llvm/test/CodeGen/SystemZ/fp-mul-12.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-mul-12.ll
@@ -29,7 +29,7 @@ define void @f2(ptr %ptr1, ptr %ptr2, ptr %ptr3, ptr %dst) {
%f1 = load fp128, ptr %ptr1
%f2 = load fp128, ptr %ptr2
%f3 = load fp128, ptr %ptr3
- %neg = fsub fp128 0xL00000000000000008000000000000000, %f3
+ %neg = fsub fp128 f0x80000000000000000000000000000000, %f3
%res = call fp128 @llvm.fma.f128 (fp128 %f1, fp128 %f2, fp128 %neg)
store fp128 %res, ptr %dst
ret void
@@ -47,7 +47,7 @@ define void @f3(ptr %ptr1, ptr %ptr2, ptr %ptr3, ptr %dst) {
%f2 = load fp128, ptr %ptr2
%f3 = load fp128, ptr %ptr3
%res = call fp128 @llvm.fma.f128 (fp128 %f1, fp128 %f2, fp128 %f3)
- %negres = fsub fp128 0xL00000000000000008000000000000000, %res
+ %negres = fsub fp128 f0x80000000000000000000000000000000, %res
store fp128 %negres, ptr %dst
ret void
}
@@ -63,9 +63,9 @@ define void @f4(ptr %ptr1, ptr %ptr2, ptr %ptr3, ptr %dst) {
%f1 = load fp128, ptr %ptr1
%f2 = load fp128, ptr %ptr2
%f3 = load fp128, ptr %ptr3
- %neg = fsub fp128 0xL00000000000000008000000000000000, %f3
+ %neg = fsub fp128 f0x80000000000000000000000000000000, %f3
%res = call fp128 @llvm.fma.f128 (fp128 %f1, fp128 %f2, fp128 %neg)
- %negres = fsub fp128 0xL00000000000000008000000000000000, %res
+ %negres = fsub fp128 f0x80000000000000000000000000000000, %res
store fp128 %negres, ptr %dst
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-cmp-03.ll b/llvm/test/CodeGen/SystemZ/fp-strict-cmp-03.ll
index 61919b5e7121e4..ade37a5f54bf95 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-cmp-03.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-cmp-03.ll
@@ -34,7 +34,7 @@ define i64 @f2(i64 %a, i64 %b, ptr %ptr) #0 {
; CHECK: br %r14
%f = load fp128, ptr %ptr
%cond = call i1 @llvm.experimental.constrained.fcmp.f128(
- fp128 %f, fp128 0xL00000000000000000000000000000000,
+ fp128 %f, fp128 f0x00000000000000000000000000000000,
metadata !"oeq",
metadata !"fpexcept.strict") #0
%res = select i1 %cond, i64 %a, i64 %b
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-cmp-04.ll b/llvm/test/CodeGen/SystemZ/fp-strict-cmp-04.ll
index bf9ccbcd70550e..a29a09e7cb14ec 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-cmp-04.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-cmp-04.ll
@@ -371,7 +371,7 @@ entry:
%mul = fmul fp128 %val1, %val2
store fp128 %mul, ptr %ptr2
%cmp = call i1 @llvm.experimental.constrained.fcmp.f128(
- fp128 %val1, fp128 0xL00000000000000000000000000000000,
+ fp128 %val1, fp128 f0x00000000000000000000000000000000,
metadata !"olt",
metadata !"fpexcept.strict") #0
br i1 %cmp, label %exit, label %store
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-cmp-06.ll b/llvm/test/CodeGen/SystemZ/fp-strict-cmp-06.ll
index d927ccbae2e3c0..938c681e4f602b 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-cmp-06.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-cmp-06.ll
@@ -31,7 +31,7 @@ define i64 @f2(i64 %a, i64 %b, ptr %ptr) #0 {
; CHECK: br %r14
%f = load fp128, ptr %ptr
%cond = call i1 @llvm.experimental.constrained.fcmp.f128(
- fp128 %f, fp128 0xL00000000000000000000000000000000,
+ fp128 %f, fp128 f0x00000000000000000000000000000000,
metadata !"oeq",
metadata !"fpexcept.strict") #0
%res = select i1 %cond, i64 %a, i64 %b
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-cmps-03.ll b/llvm/test/CodeGen/SystemZ/fp-strict-cmps-03.ll
index d759311ff6551a..4e8d4b9c4b6bf3 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-cmps-03.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-cmps-03.ll
@@ -35,7 +35,7 @@ define i64 @f2(i64 %a, i64 %b, ptr %ptr) #0 {
; CHECK: br %r14
%f = load fp128, ptr %ptr
%cond = call i1 @llvm.experimental.constrained.fcmps.f128(
- fp128 %f, fp128 0xL00000000000000000000000000000000,
+ fp128 %f, fp128 f0x00000000000000000000000000000000,
metadata !"oeq",
metadata !"fpexcept.strict") #0
%res = select i1 %cond, i64 %a, i64 %b
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-cmps-06.ll b/llvm/test/CodeGen/SystemZ/fp-strict-cmps-06.ll
index 73afc69008e5a9..ebc4bb5227b081 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-cmps-06.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-cmps-06.ll
@@ -31,7 +31,7 @@ define i64 @f2(i64 %a, i64 %b, ptr %ptr) #0 {
; CHECK: br %r14
%f = load fp128, ptr %ptr
%cond = call i1 @llvm.experimental.constrained.fcmps.f128(
- fp128 %f, fp128 0xL00000000000000000000000000000000,
+ fp128 %f, fp128 f0x00000000000000000000000000000000,
metadata !"oeq",
metadata !"fpexcept.strict") #0
%res = select i1 %cond, i64 %a, i64 %b
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-mul-04.ll b/llvm/test/CodeGen/SystemZ/fp-strict-mul-04.ll
index 732762e1ea6bc7..d240e82eca4991 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-mul-04.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-mul-04.ll
@@ -250,7 +250,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra0 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul0, fp128 0xL00000000000000003fff000001000000,
+ fp128 %mul0, fp128 f0x3fff0000010000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc0 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -269,7 +269,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra1 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul1, fp128 0xL00000000000000003fff000002000000,
+ fp128 %mul1, fp128 f0x3fff0000020000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc1 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -288,7 +288,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra2 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul2, fp128 0xL00000000000000003fff000003000000,
+ fp128 %mul2, fp128 f0x3fff0000030000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc2 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -307,7 +307,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra3 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul3, fp128 0xL00000000000000003fff000004000000,
+ fp128 %mul3, fp128 f0x3fff0000040000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc3 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -326,7 +326,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra4 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul4, fp128 0xL00000000000000003fff000005000000,
+ fp128 %mul4, fp128 f0x3fff0000050000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc4 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -345,7 +345,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra5 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul5, fp128 0xL00000000000000003fff000006000000,
+ fp128 %mul5, fp128 f0x3fff0000060000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc5 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -364,7 +364,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra6 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul6, fp128 0xL00000000000000003fff000007000000,
+ fp128 %mul6, fp128 f0x3fff0000070000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc6 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -383,7 +383,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra7 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul7, fp128 0xL00000000000000003fff000008000000,
+ fp128 %mul7, fp128 f0x3fff0000080000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc7 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -402,7 +402,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra8 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul8, fp128 0xL00000000000000003fff000009000000,
+ fp128 %mul8, fp128 f0x3fff0000090000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc8 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
@@ -421,7 +421,7 @@ define double @f7(ptr %ptr0) #0 {
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%extra9 = call fp128 @llvm.experimental.constrained.fmul.f128(
- fp128 %mul9, fp128 0xL00000000000000003fff00000a000000,
+ fp128 %mul9, fp128 f0x3fff00000a0000000000000000000000,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
%trunc9 = call double @llvm.experimental.constrained.fptrunc.f64.f128(
diff --git a/llvm/test/CodeGen/SystemZ/fp-strict-mul-12.ll b/llvm/test/CodeGen/SystemZ/fp-strict-mul-12.ll
index 0fb5dfcbde61fc..9021867f3f9381 100644
--- a/llvm/test/CodeGen/SystemZ/fp-strict-mul-12.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-strict-mul-12.ll
@@ -32,7 +32,7 @@ define void @f2(ptr %ptr1, ptr %ptr2, ptr %ptr3, ptr %dst) #0 {
%f1 = load fp128, ptr %ptr1
%f2 = load fp128, ptr %ptr2
%f3 = load fp128, ptr %ptr3
- %neg = fsub fp128 0xL00000000000000008000000000000000, %f3
+ %neg = fsub fp128 f0x80000000000000000000000000000000, %f3
%res = call fp128 @llvm.experimental.constrained.fma.f128 (
fp128 %f1, fp128 %f2, fp128 %neg,
metadata !"round.dynamic",
@@ -56,7 +56,7 @@ define void @f3(ptr %ptr1, ptr %ptr2, ptr %ptr3, ptr %dst) #0 {
fp128 %f1, fp128 %f2, fp128 %f3,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
- %negres = fsub fp128 0xL00000000000000008000000000000000, %res
+ %negres = fsub fp128 f0x80000000000000000000000000000000, %res
store fp128 %negres, ptr %dst
ret void
}
@@ -72,12 +72,12 @@ define void @f4(ptr %ptr1, ptr %ptr2, ptr %ptr3, ptr %dst) #0 {
%f1 = load fp128, ptr %ptr1
%f2 = load fp128, ptr %ptr2
%f3 = load fp128, ptr %ptr3
- %neg = fsub fp128 0xL00000000000000008000000000000000, %f3
+ %neg = fsub fp128 f0x80000000000000000000000000000000, %f3
%res = call fp128 @llvm.experimental.constrained.fma.f128 (
fp128 %f1, fp128 %f2, fp128 %neg,
metadata !"round.dynamic",
metadata !"fpexcept.strict") #0
- %negres = fsub fp128 0xL00000000000000008000000000000000, %res
+ %negres = fsub fp128 f0x80000000000000000000000000000000, %res
store fp128 %negres, ptr %dst
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/loop-03.ll b/llvm/test/CodeGen/SystemZ/loop-03.ll
index f62e7f193528f4..d6f36ad5e09d23 100644
--- a/llvm/test/CodeGen/SystemZ/loop-03.ll
+++ b/llvm/test/CodeGen/SystemZ/loop-03.ll
@@ -109,18 +109,18 @@ define void @fun1(ptr) {
br i1 undef, label %7, label %2
; <label>:2: ; preds = %2, %1
- %3 = phi fp128 [ %5, %2 ], [ 0xL00000000000000000000000000000000, %1 ]
- %4 = tail call fp128 @llvm.pow.f128(fp128 0xL00000000000000000000000000000000, fp128 0xL00000000000000000000000000000000) #2
+ %3 = phi fp128 [ %5, %2 ], [ f0x00000000000000000000000000000000, %1 ]
+ %4 = tail call fp128 @llvm.pow.f128(fp128 f0x00000000000000000000000000000000, fp128 f0x00000000000000000000000000000000) #2
%5 = fadd fp128 %3, %4
%6 = icmp eq i64 undef, 0
br i1 %6, label %7, label %2
; <label>:7: ; preds = %2, %1
- %8 = phi fp128 [ 0xL00000000000000000000000000000000, %1 ], [ %5, %2 ]
- %9 = fadd fp128 0xL00000000000000000000000000000000, %8
- %10 = fadd fp128 0xL00000000000000000000000000000000, %9
- %11 = fadd fp128 0xL00000000000000000000000000000000, %10
- %12 = tail call fp128 @llvm.pow.f128(fp128 %11, fp128 0xL00000000000000000000000000000000) #2
+ %8 = phi fp128 [ f0x00000000000000000000000000000000, %1 ], [ %5, %2 ]
+ %9 = fadd fp128 f0x00000000000000000000000000000000, %8
+ %10 = fadd fp128 f0x00000000000000000000000000000000, %9
+ %11 = fadd fp128 f0x00000000000000000000000000000000, %10
+ %12 = tail call fp128 @llvm.pow.f128(fp128 %11, fp128 f0x00000000000000000000000000000000) #2
store fp128 %12, ptr %0, align 8
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/soft-float-args.ll b/llvm/test/CodeGen/SystemZ/soft-float-args.ll
index 06b362672b1f6d..bc0449d567b314 100644
--- a/llvm/test/CodeGen/SystemZ/soft-float-args.ll
+++ b/llvm/test/CodeGen/SystemZ/soft-float-args.ll
@@ -49,7 +49,7 @@ define fp128 @f2_fp128(fp128 %arg) {
; CHECK-NEXT: lg %r3, 200(%r15)
; CHECK-NEXT: lmg %r14, %r15, 320(%r15)
; CHECK-NEXT: br %r14
- %res = fadd fp128 %arg, 0xL00000000000000004001400000000000
+ %res = fadd fp128 %arg, f0x40014000000000000000000000000000
ret fp128 %res
}
diff --git a/llvm/test/CodeGen/SystemZ/tdc-03.ll b/llvm/test/CodeGen/SystemZ/tdc-03.ll
index 95708f1effc6bd..496f0434313491 100644
--- a/llvm/test/CodeGen/SystemZ/tdc-03.ll
+++ b/llvm/test/CodeGen/SystemZ/tdc-03.ll
@@ -113,7 +113,7 @@ define i32 @f10(fp128 %x) {
; CHECK-LABEL: f10
; CHECK: tcxb %f0, 3279
%y = call fp128 @llvm.fabs.f128(fp128 %x)
- %res = fcmp ult fp128 %y, 0xL00000000000000000001000000000000
+ %res = fcmp ult fp128 %y, f0x00010000000000000000000000000000
%xres = zext i1 %res to i32
ret i32 %xres
}
diff --git a/llvm/test/CodeGen/SystemZ/vec-args-08.ll b/llvm/test/CodeGen/SystemZ/vec-args-08.ll
index 96ef7db06849a7..9950549f812464 100644
--- a/llvm/test/CodeGen/SystemZ/vec-args-08.ll
+++ b/llvm/test/CodeGen/SystemZ/vec-args-08.ll
@@ -74,7 +74,7 @@ define <1 x fp128> @f6() {
; CHECK-NEXT: lzxr %f0
; CHECK-NEXT: vmrhg %v24, %v0, %v2
; CHECK-NEXT: br %r14
- ret <1 x fp128><fp128 0xL00000000000000000000000000000000>
+ ret <1 x fp128><fp128 f0x00000000000000000000000000000000>
}
declare void @bar7(<1 x fp128>)
@@ -92,7 +92,7 @@ define void @f7() {
; CHECK-NEXT: brasl %r14, bar7@PLT
; CHECK-NEXT: lmg %r14, %r15, 272(%r15)
; CHECK-NEXT: br %r14
- call void @bar7 (<1 x fp128> <fp128 0xL00000000000000000000000000000000>)
+ call void @bar7 (<1 x fp128> <fp128 f0x00000000000000000000000000000000>)
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/vec-max-05.ll b/llvm/test/CodeGen/SystemZ/vec-max-05.ll
index 7bdf4e06029d2a..9040eb3bdbbd72 100644
--- a/llvm/test/CodeGen/SystemZ/vec-max-05.ll
+++ b/llvm/test/CodeGen/SystemZ/vec-max-05.ll
@@ -210,8 +210,8 @@ define void @f24(ptr %ptr, ptr %dst) {
; CHECK: vst [[RES]], 0(%r3)
; CHECK: br %r14
%val = load fp128, ptr %ptr
- %cmp = fcmp ogt fp128 %val, 0xL00000000000000000000000000000000
- %res = select i1 %cmp, fp128 %val, fp128 0xL00000000000000000000000000000000
+ %cmp = fcmp ogt fp128 %val, f0x00000000000000000000000000000000
+ %res = select i1 %cmp, fp128 %val, fp128 f0x00000000000000000000000000000000
store fp128 %res, ptr %dst
ret void
}
@@ -225,8 +225,8 @@ define void @f25(ptr %ptr, ptr %dst) {
; CHECK: vst [[RES]], 0(%r3)
; CHECK: br %r14
%val = load fp128, ptr %ptr
- %cmp = fcmp ugt fp128 %val, 0xL00000000000000000000000000000000
- %res = select i1 %cmp, fp128 %val, fp128 0xL00000000000000000000000000000000
+ %cmp = fcmp ugt fp128 %val, f0x00000000000000000000000000000000
+ %res = select i1 %cmp, fp128 %val, fp128 f0x00000000000000000000000000000000
store fp128 %res, ptr %dst
ret void
}
diff --git a/llvm/test/CodeGen/SystemZ/vec-min-05.ll b/llvm/test/CodeGen/SystemZ/vec-min-05.ll
index bf27eb3e56036c..96e56d4aae3b9c 100644
--- a/llvm/test/CodeGen/SystemZ/vec-min-05.ll
+++ b/llvm/test/CodeGen/SystemZ/vec-min-05.ll
@@ -210,8 +210,8 @@ define void @f24(ptr %ptr, ptr %dst) {
; CHECK: vst [[RES]], 0(%r3)
; CHECK: br %r14
%val = load fp128, ptr %ptr
- %cmp = fcmp olt fp128 %val, 0xL00000000000000000000000000000000
- %res = select i1 %cmp, fp128 %val, fp128 0xL00000000000000000000000000000000
+ %cmp = fcmp olt fp128 %val, f0x00000000000000000000000000000000
+ %res = select i1 %cmp, fp128 %val, fp128 f0x00000000000000000000000000000000
store fp128 %res, ptr %dst
ret void
}
@@ -225,8 +225,8 @@ define void @f25(ptr %ptr, ptr %dst) {
; CHECK: vst [[RES]], 0(%r3)
; CHECK: br %r14
%val = load fp128, ptr %ptr
- %cmp = fcmp ult fp128 %val, 0xL00000000000000000000000000000000
- %res = select i1 %cmp, fp128 %val, fp128 0xL00000000000000000000000000000000
+ %cmp = fcmp ult fp128 %val, f0x00000000000000000000000000000000
+ %res = select i1 %cmp, fp128 %val, fp128 f0x00000000000000000000000000000000
store fp128 %res, ptr %dst
ret void
}
diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/exitcount.ll b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/exitcount.ll
index 3c1510623e5c43..e4bef84e2ebd7a 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/exitcount.ll
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/exitcount.ll
@@ -69,7 +69,7 @@ do.body6: ; preds = %do.body6, %do.end
%8 = tail call <8 x i1> @llvm.arm.mve.vctp16(i32 %blkCnt.1)
%9 = tail call <8 x i16> @llvm.masked.load.v8i16.p0(ptr %px.0, i32 2, <8 x i1> %8, <8 x i16> zeroinitializer)
%10 = tail call fast <8 x half> @llvm.arm.mve.vcvt.fp.int.predicated.v8f16.v8i16.v8i1(<8 x i16> %9, i32 0, <8 x i1> %8, <8 x half> undef)
- %11 = tail call fast <8 x half> @llvm.arm.mve.mul.predicated.v8f16.v8i1(<8 x half> %10, <8 x half> <half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800>, <8 x i1> %8, <8 x half> undef)
+ %11 = tail call fast <8 x half> @llvm.arm.mve.mul.predicated.v8f16.v8i1(<8 x half> %10, <8 x half> <half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800>, <8 x i1> %8, <8 x half> undef)
tail call void @llvm.masked.store.v8f16.p0(<8 x half> %11, ptr %pframef16.1, i32 2, <8 x i1> %8)
%add.ptr8 = getelementptr inbounds i16, ptr %px.0, i32 8
%add.ptr9 = getelementptr inbounds half, ptr %pframef16.1, i32 8
diff --git a/llvm/test/CodeGen/Thumb2/bf16-instructions.ll b/llvm/test/CodeGen/Thumb2/bf16-instructions.ll
index 786e35517fd7c6..58400040fc6029 100644
--- a/llvm/test/CodeGen/Thumb2/bf16-instructions.ll
+++ b/llvm/test/CodeGen/Thumb2/bf16-instructions.ll
@@ -985,10 +985,10 @@ define void @test_fccmp(bfloat %in, ptr %out) {
; CHECK-FP-NEXT: .long 0x45000000 @ float 2048
; CHECK-FP-NEXT: .LCPI34_1:
; CHECK-FP-NEXT: .long 0x48000000 @ float 131072
- %cmp1 = fcmp ogt bfloat %in, 0xR4800
- %cmp2 = fcmp olt bfloat %in, 0xR4500
+ %cmp1 = fcmp ogt bfloat %in, f0x4800
+ %cmp2 = fcmp olt bfloat %in, f0x4500
%cond = and i1 %cmp1, %cmp2
- %result = select i1 %cond, bfloat %in, bfloat 0xR4500
+ %result = select i1 %cond, bfloat %in, bfloat f0x4500
store bfloat %result, ptr %out
ret void
}
diff --git a/llvm/test/CodeGen/Thumb2/mve-float16regloops.ll b/llvm/test/CodeGen/Thumb2/mve-float16regloops.ll
index c8dd949ca9d882..621a5941570370 100644
--- a/llvm/test/CodeGen/Thumb2/mve-float16regloops.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-float16regloops.ll
@@ -1450,7 +1450,7 @@ do.body: ; preds = %if.end, %entry
%i6 = load <8 x half>, ptr %add.ptr, align 2
%add.ptr2 = getelementptr inbounds half, ptr %pCurCoeffs.0, i32 5
%i8 = load <8 x half>, ptr %pState.0, align 2
- %i9 = shufflevector <8 x half> %i8, <8 x half> <half poison, half poison, half 0xH0000, half 0xH0000, half poison, half poison, half poison, half poison>, <8 x i32> <i32 0, i32 1, i32 10, i32 11, i32 4, i32 5, i32 6, i32 7>
+ %i9 = shufflevector <8 x half> %i8, <8 x half> <half poison, half poison, half f0x0000, half f0x0000, half poison, half poison, half poison, half poison>, <8 x i32> <i32 0, i32 1, i32 10, i32 11, i32 4, i32 5, i32 6, i32 7>
%i10 = bitcast <8 x half> %i4 to <8 x i16>
%i11 = tail call { i32, <8 x i16> } @llvm.arm.mve.vshlc.v8i16(<8 x i16> %i10, i32 0, i32 16)
%i12 = extractvalue { i32, <8 x i16> } %i11, 0
@@ -1477,7 +1477,7 @@ while.body: ; preds = %while.body, %do.bod
%i22 = extractelement <8 x half> %i21, i32 0
%.splat6 = shufflevector <8 x half> %i21, <8 x half> poison, <8 x i32> zeroinitializer
%i23 = tail call fast <8 x half> @llvm.fma.v8f16(<8 x half> %i6, <8 x half> %.splat6, <8 x half> %i21)
- %i24 = insertelement <8 x half> %i23, half 0xH0000, i32 3
+ %i24 = insertelement <8 x half> %i23, half f0x0000, i32 3
%.splatinsert7 = insertelement <8 x half> poison, half %i20, i32 0
%.splat8 = shufflevector <8 x half> %.splatinsert7, <8 x half> poison, <8 x i32> zeroinitializer
%i25 = tail call fast <8 x half> @llvm.fma.v8f16(<8 x half> %i14, <8 x half> %.splat8, <8 x half> %i24)
@@ -1485,7 +1485,7 @@ while.body: ; preds = %while.body, %do.bod
%.splat10 = shufflevector <8 x half> %i25, <8 x half> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1>
%i27 = tail call fast <8 x half> @llvm.fma.v8f16(<8 x half> %i18, <8 x half> %.splat10, <8 x half> %i25)
%i28 = shufflevector <8 x half> %i27, <8 x half> undef, <8 x i32> <i32 2, i32 undef, i32 undef, i32 3, i32 4, i32 5, i32 6, i32 7>
- %i29 = insertelement <8 x half> %i28, half 0xH0000, i32 2
+ %i29 = insertelement <8 x half> %i28, half f0x0000, i32 2
%i30 = shufflevector <8 x half> %i29, <8 x half> %i27, <8 x i32> <i32 0, i32 11, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
%incdec.ptr11 = getelementptr inbounds half, ptr %pOut.192, i32 1
store half %i22, ptr %pOut.192, align 2
diff --git a/llvm/test/CodeGen/Thumb2/mve-pred-selectop3.ll b/llvm/test/CodeGen/Thumb2/mve-pred-selectop3.ll
index 080c6c1a1efdc8..6e4da4d344ec1f 100644
--- a/llvm/test/CodeGen/Thumb2/mve-pred-selectop3.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-pred-selectop3.ll
@@ -528,7 +528,7 @@ define arm_aapcs_vfpcc <8 x half> @fadd_v8f16_x(<8 x half> %x, <8 x half> %y, i3
; CHECK-NEXT: bx lr
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
- %a = select <8 x i1> %c, <8 x half> %y, <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>
+ %a = select <8 x i1> %c, <8 x half> %y, <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>
%b = fadd <8 x half> %a, %x
ret <8 x half> %b
}
@@ -614,7 +614,7 @@ define arm_aapcs_vfpcc <8 x half> @fmul_v8f16_x(<8 x half> %x, <8 x half> %y, i3
; CHECK-NEXT: bx lr
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
- %a = select <8 x i1> %c, <8 x half> %y, <8 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>
+ %a = select <8 x i1> %c, <8 x half> %y, <8 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>
%b = fmul <8 x half> %a, %x
ret <8 x half> %b
}
@@ -666,7 +666,7 @@ define arm_aapcs_vfpcc <8 x half> @fdiv_v8f16_x(<8 x half> %x, <8 x half> %y, i3
; CHECK-NEXT: bx lr
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
- %a = select <8 x i1> %c, <8 x half> %y, <8 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>
+ %a = select <8 x i1> %c, <8 x half> %y, <8 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>
%b = fdiv <8 x half> %x, %a
ret <8 x half> %b
}
@@ -700,7 +700,7 @@ define arm_aapcs_vfpcc <8 x half> @fmai_v8f16_x(<8 x half> %x, <8 x half> %y, <8
; CHECK-NEXT: bx lr
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
- %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>
+ %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>
%b = call <8 x half> @llvm.fma.v8f16(<8 x half> %y, <8 x half> %z, <8 x half> %a)
ret <8 x half> %b
}
@@ -730,7 +730,7 @@ define arm_aapcs_vfpcc <8 x half> @fma_v8f16_x(<8 x half> %x, <8 x half> %y, <8
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
%m = fmul fast <8 x half> %y, %z
- %a = select <8 x i1> %c, <8 x half> %m, <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>
+ %a = select <8 x i1> %c, <8 x half> %m, <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>
%b = fadd fast <8 x half> %a, %x
ret <8 x half> %b
}
@@ -1304,7 +1304,7 @@ entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
%i = insertelement <8 x half> undef, half %y, i64 0
%ys = shufflevector <8 x half> %i, <8 x half> undef, <8 x i32> zeroinitializer
- %a = select <8 x i1> %c, <8 x half> %ys, <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>
+ %a = select <8 x i1> %c, <8 x half> %ys, <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>
%b = fadd <8 x half> %a, %x
ret <8 x half> %b
}
@@ -1372,7 +1372,7 @@ entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
%i = insertelement <8 x half> undef, half %y, i64 0
%ys = shufflevector <8 x half> %i, <8 x half> undef, <8 x i32> zeroinitializer
- %a = select <8 x i1> %c, <8 x half> %ys, <8 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>
+ %a = select <8 x i1> %c, <8 x half> %ys, <8 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>
%b = fmul <8 x half> %a, %x
ret <8 x half> %b
}
@@ -2101,7 +2101,7 @@ define arm_aapcs_vfpcc <8 x half> @fadd_v8f16_y(<8 x half> %x, <8 x half> %y, i3
; CHECK-NEXT: bx lr
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
- %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>
+ %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>
%b = fadd <8 x half> %a, %y
ret <8 x half> %b
}
@@ -2161,7 +2161,7 @@ define arm_aapcs_vfpcc <8 x half> @fmul_v8f16_y(<8 x half> %x, <8 x half> %y, i3
; CHECK-NEXT: bx lr
entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
- %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>
+ %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>
%b = fmul <8 x half> %a, %y
ret <8 x half> %b
}
@@ -2909,7 +2909,7 @@ entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
%i = insertelement <8 x half> undef, half %y, i64 0
%ys = shufflevector <8 x half> %i, <8 x half> undef, <8 x i32> zeroinitializer
- %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000, half 0xH8000>
+ %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000, half f0x8000>
%b = fadd <8 x half> %ys, %a
ret <8 x half> %b
}
@@ -2985,7 +2985,7 @@ entry:
%c = call <8 x i1> @llvm.arm.mve.vctp16(i32 %n)
%i = insertelement <8 x half> undef, half %y, i64 0
%ys = shufflevector <8 x half> %i, <8 x half> undef, <8 x i32> zeroinitializer
- %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>
+ %a = select <8 x i1> %c, <8 x half> %x, <8 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>
%b = fmul <8 x half> %ys, %a
ret <8 x half> %b
}
diff --git a/llvm/test/CodeGen/Thumb2/mve-vcvt-fixed-to-float.ll b/llvm/test/CodeGen/Thumb2/mve-vcvt-fixed-to-float.ll
index 38a2cfc1a579d7..0ff542f5a77361 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vcvt-fixed-to-float.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-vcvt-fixed-to-float.ll
@@ -339,7 +339,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_1(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #1
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800>
+ %3 = fmul ninf <8 x half> %2, <half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800>
ret <8 x half> %3
}
@@ -349,7 +349,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_2(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #2
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400>
+ %3 = fmul ninf <8 x half> %2, <half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400>
ret <8 x half> %3
}
@@ -359,7 +359,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_3(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #3
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000>
+ %3 = fmul ninf <8 x half> %2, <half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000>
ret <8 x half> %3
}
@@ -369,7 +369,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_4(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #4
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00>
+ %3 = fmul ninf <8 x half> %2, <half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00>
ret <8 x half> %3
}
@@ -379,7 +379,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_5(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #5
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800>
+ %3 = fmul ninf <8 x half> %2, <half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800>
ret <8 x half> %3
}
@@ -389,7 +389,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_6(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #6
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400>
+ %3 = fmul ninf <8 x half> %2, <half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400>
ret <8 x half> %3
}
@@ -399,7 +399,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_7(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #7
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000>
+ %3 = fmul ninf <8 x half> %2, <half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000>
ret <8 x half> %3
}
@@ -409,7 +409,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_8(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #8
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00>
+ %3 = fmul ninf <8 x half> %2, <half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00>
ret <8 x half> %3
}
@@ -419,7 +419,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_9(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #9
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800>
+ %3 = fmul ninf <8 x half> %2, <half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800>
ret <8 x half> %3
}
@@ -429,7 +429,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_10(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #10
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400>
+ %3 = fmul ninf <8 x half> %2, <half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400>
ret <8 x half> %3
}
@@ -439,7 +439,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_11(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #11
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000>
+ %3 = fmul ninf <8 x half> %2, <half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000>
ret <8 x half> %3
}
@@ -449,7 +449,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_12(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #12
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00>
+ %3 = fmul ninf <8 x half> %2, <half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00>
ret <8 x half> %3
}
@@ -459,7 +459,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_13(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #13
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800>
+ %3 = fmul ninf <8 x half> %2, <half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800>
ret <8 x half> %3
}
@@ -469,7 +469,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_14(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #14
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400>
+ %3 = fmul ninf <8 x half> %2, <half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400>
ret <8 x half> %3
}
@@ -481,7 +481,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_i16_15(<8 x i16> %0) {
; CHECK-NEXT: vmul.f16 q0, q0, q1
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200>
+ %3 = fmul ninf <8 x half> %2, <half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200>
ret <8 x half> %3
}
@@ -825,7 +825,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_1(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #1
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800, half 0xH3800>
+ %3 = fmul ninf <8 x half> %2, <half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800, half f0x3800>
ret <8 x half> %3
}
@@ -835,7 +835,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_2(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #2
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400, half 0xH3400>
+ %3 = fmul ninf <8 x half> %2, <half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400, half f0x3400>
ret <8 x half> %3
}
@@ -845,7 +845,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_3(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #3
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000, half 0xH3000>
+ %3 = fmul ninf <8 x half> %2, <half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000, half f0x3000>
ret <8 x half> %3
}
@@ -855,7 +855,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_4(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #4
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00, half 0xH2C00>
+ %3 = fmul ninf <8 x half> %2, <half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00, half f0x2C00>
ret <8 x half> %3
}
@@ -865,7 +865,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_5(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #5
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800, half 0xH2800>
+ %3 = fmul ninf <8 x half> %2, <half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800, half f0x2800>
ret <8 x half> %3
}
@@ -875,7 +875,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_6(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #6
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400, half 0xH2400>
+ %3 = fmul ninf <8 x half> %2, <half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400, half f0x2400>
ret <8 x half> %3
}
@@ -885,7 +885,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_7(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #7
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000, half 0xH2000>
+ %3 = fmul ninf <8 x half> %2, <half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000, half f0x2000>
ret <8 x half> %3
}
@@ -895,7 +895,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_8(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #8
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00, half 0xH1C00>
+ %3 = fmul ninf <8 x half> %2, <half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00, half f0x1C00>
ret <8 x half> %3
}
@@ -905,7 +905,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_9(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #9
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800, half 0xH1800>
+ %3 = fmul ninf <8 x half> %2, <half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800, half f0x1800>
ret <8 x half> %3
}
@@ -915,7 +915,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_10(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #10
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400, half 0xH1400>
+ %3 = fmul ninf <8 x half> %2, <half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400, half f0x1400>
ret <8 x half> %3
}
@@ -925,7 +925,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_11(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #11
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000, half 0xH1000>
+ %3 = fmul ninf <8 x half> %2, <half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000, half f0x1000>
ret <8 x half> %3
}
@@ -935,7 +935,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_12(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #12
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00, half 0xH0C00>
+ %3 = fmul ninf <8 x half> %2, <half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00, half f0x0C00>
ret <8 x half> %3
}
@@ -945,7 +945,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_13(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #13
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800, half 0xH0800>
+ %3 = fmul ninf <8 x half> %2, <half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800, half f0x0800>
ret <8 x half> %3
}
@@ -955,7 +955,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_14(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.u16 q0, q0, #14
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400>
+ %3 = fmul ninf <8 x half> %2, <half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400>
ret <8 x half> %3
}
@@ -967,7 +967,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_15(<8 x i16> %0) {
; CHECK-NEXT: vmul.f16 q0, q0, q1
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul ninf <8 x half> %2, <half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200, half 0xH0200>
+ %3 = fmul ninf <8 x half> %2, <half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200, half f0x0200>
ret <8 x half> %3
}
@@ -979,7 +979,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_u16_inf(<8 x i16> %0) {
; CHECK-NEXT: vmul.f16 q0, q0, q1
; CHECK-NEXT: bx lr
%2 = uitofp <8 x i16> %0 to <8 x half>
- %3 = fmul <8 x half> %2, <half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400>
+ %3 = fmul <8 x half> %2, <half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400>
ret <8 x half> %3
}
@@ -989,7 +989,7 @@ define arm_aapcs_vfpcc <8 x half> @vcvt_s16_inf(<8 x i16> %0) {
; CHECK-NEXT: vcvt.f16.s16 q0, q0, #14
; CHECK-NEXT: bx lr
%2 = sitofp <8 x i16> %0 to <8 x half>
- %3 = fmul <8 x half> %2, <half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400, half 0xH0400>
+ %3 = fmul <8 x half> %2, <half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400, half f0x0400>
ret <8 x half> %3
}
diff --git a/llvm/test/CodeGen/Thumb2/mve-vcvt-float-to-fixed.ll b/llvm/test/CodeGen/Thumb2/mve-vcvt-float-to-fixed.ll
index 083a1bc0e3db8e..1bdb7966dbd30c 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vcvt-float-to-fixed.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-vcvt-float-to-fixed.ll
@@ -338,7 +338,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_1(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #1
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000>
+ %2 = fmul fast <8 x half> %0, <half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -348,7 +348,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_2(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #2
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400>
+ %2 = fmul fast <8 x half> %0, <half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -358,7 +358,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_3(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #3
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800>
+ %2 = fmul fast <8 x half> %0, <half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -368,7 +368,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_4(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #4
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00>
+ %2 = fmul fast <8 x half> %0, <half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -378,7 +378,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_5(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #5
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000>
+ %2 = fmul fast <8 x half> %0, <half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -388,7 +388,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_6(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #6
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400>
+ %2 = fmul fast <8 x half> %0, <half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -398,7 +398,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_7(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #7
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800>
+ %2 = fmul fast <8 x half> %0, <half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -408,7 +408,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_8(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #8
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00>
+ %2 = fmul fast <8 x half> %0, <half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -418,7 +418,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_9(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #9
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000>
+ %2 = fmul fast <8 x half> %0, <half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -428,7 +428,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_10(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #10
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400>
+ %2 = fmul fast <8 x half> %0, <half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -438,7 +438,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_11(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #11
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800>
+ %2 = fmul fast <8 x half> %0, <half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -448,7 +448,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_12(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #12
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00>
+ %2 = fmul fast <8 x half> %0, <half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -458,7 +458,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_13(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #13
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>
+ %2 = fmul fast <8 x half> %0, <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -468,7 +468,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_14(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #14
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400>
+ %2 = fmul fast <8 x half> %0, <half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -478,7 +478,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_i16_15(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #15
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800>
+ %2 = fmul fast <8 x half> %0, <half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -820,7 +820,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_1(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #1
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000>
+ %2 = fmul fast <8 x half> %0, <half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -830,7 +830,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_2(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #2
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400, half 0xH4400>
+ %2 = fmul fast <8 x half> %0, <half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400, half f0x4400>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -840,7 +840,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_3(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #3
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800, half 0xH4800>
+ %2 = fmul fast <8 x half> %0, <half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800, half f0x4800>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -850,7 +850,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_4(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #4
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00, half 0xH4C00>
+ %2 = fmul fast <8 x half> %0, <half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00, half f0x4C00>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -860,7 +860,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_5(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #5
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000, half 0xH5000>
+ %2 = fmul fast <8 x half> %0, <half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000, half f0x5000>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -870,7 +870,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_6(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #6
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400>
+ %2 = fmul fast <8 x half> %0, <half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -880,7 +880,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_7(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #7
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800>
+ %2 = fmul fast <8 x half> %0, <half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -890,7 +890,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_8(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #8
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00, half 0xH5C00>
+ %2 = fmul fast <8 x half> %0, <half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00, half f0x5C00>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -900,7 +900,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_9(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #9
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000, half 0xH6000>
+ %2 = fmul fast <8 x half> %0, <half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000, half f0x6000>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -910,7 +910,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_10(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #10
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400, half 0xH6400>
+ %2 = fmul fast <8 x half> %0, <half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400, half f0x6400>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -920,7 +920,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_11(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #11
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800, half 0xH6800>
+ %2 = fmul fast <8 x half> %0, <half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800, half f0x6800>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -930,7 +930,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_12(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #12
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00, half 0xH6C00>
+ %2 = fmul fast <8 x half> %0, <half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00, half f0x6C00>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -940,7 +940,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_13(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #13
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>
+ %2 = fmul fast <8 x half> %0, <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -950,7 +950,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_14(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #14
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400, half 0xH7400>
+ %2 = fmul fast <8 x half> %0, <half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400, half f0x7400>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -960,7 +960,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_15(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #15
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800>
+ %2 = fmul fast <8 x half> %0, <half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -972,7 +972,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_u16_inf(<8 x half> %0) {
; CHECK-NEXT: vmul.f16 q0, q0, q1
; CHECK-NEXT: vcvt.u16.f16 q0, q0
; CHECK-NEXT: bx lr
- %2 = fmul <8 x half> %0, <half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800>
+ %2 = fmul <8 x half> %0, <half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800>
%3 = fptoui <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -982,7 +982,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_s16_inf(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #15
; CHECK-NEXT: bx lr
- %2 = fmul <8 x half> %0, <half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800, half 0xH7800>
+ %2 = fmul <8 x half> %0, <half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800, half f0x7800>
%3 = fptosi <8 x half> %2 to <8 x i16>
ret <8 x i16> %3
}
@@ -1032,7 +1032,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_sat_s16_1(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #1
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000>
+ %2 = fmul fast <8 x half> %0, <half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000>
%3 = call <8 x i16> @llvm.fptosi.sat.v8i16.v8f16(<8 x half> %2)
ret <8 x i16> %3
}
@@ -1042,7 +1042,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_sat_u16_1(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #1
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000, half 0xH4000>
+ %2 = fmul fast <8 x half> %0, <half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000, half f0x4000>
%3 = call <8 x i16> @llvm.fptoui.sat.v8i16.v8f16(<8 x half> %2)
ret <8 x i16> %3
}
@@ -1052,7 +1052,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_sat_s16_6(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.s16.f16 q0, q0, #6
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400, half 0xH5400>
+ %2 = fmul fast <8 x half> %0, <half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400, half f0x5400>
%3 = call <8 x i16> @llvm.fptosi.sat.v8i16.v8f16(<8 x half> %2)
ret <8 x i16> %3
}
@@ -1062,7 +1062,7 @@ define arm_aapcs_vfpcc <8 x i16> @vcvt_sat_u16_7(<8 x half> %0) {
; CHECK: @ %bb.0:
; CHECK-NEXT: vcvt.u16.f16 q0, q0, #7
; CHECK-NEXT: bx lr
- %2 = fmul fast <8 x half> %0, <half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800, half 0xH5800>
+ %2 = fmul fast <8 x half> %0, <half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800, half f0x5800>
%3 = call <8 x i16> @llvm.fptoui.sat.v8i16.v8f16(<8 x half> %2)
ret <8 x i16> %3
}
diff --git a/llvm/test/CodeGen/VE/Scalar/br_cc.ll b/llvm/test/CodeGen/VE/Scalar/br_cc.ll
index 34d2c891fd7b09..340a396ad6667d 100644
--- a/llvm/test/CodeGen/VE/Scalar/br_cc.ll
+++ b/llvm/test/CodeGen/VE/Scalar/br_cc.ll
@@ -637,7 +637,7 @@ define void @br_cc_quad_imm(fp128 %0) {
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: .LBB27_2:
; CHECK-NEXT: b.l.t (, %s10)
- %2 = fcmp fast olt fp128 %0, 0xL00000000000000000000000000000000
+ %2 = fcmp fast olt fp128 %0, f0x00000000000000000000000000000000
br i1 %2, label %3, label %4
3: ; preds = %1
@@ -971,7 +971,7 @@ define void @br_cc_imm_quad(fp128 %0) {
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: .LBB41_2:
; CHECK-NEXT: b.l.t (, %s10)
- %2 = fcmp fast ult fp128 %0, 0xL00000000000000000000000000000000
+ %2 = fcmp fast ult fp128 %0, f0x00000000000000000000000000000000
br i1 %2, label %4, label %3
3: ; preds = %1
diff --git a/llvm/test/CodeGen/VE/Scalar/fabs.ll b/llvm/test/CodeGen/VE/Scalar/fabs.ll
index a68e561d0098f0..b9e18eeea03550 100644
--- a/llvm/test/CodeGen/VE/Scalar/fabs.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fabs.ll
@@ -103,7 +103,7 @@ define fp128 @fabs_quad_zero() {
; CHECK-NEXT: ld %s0, 8(, %s2)
; CHECK-NEXT: ld %s1, (, %s2)
; CHECK-NEXT: b.l.t (, %s10)
- ret fp128 0xL00000000000000000000000000000000
+ ret fp128 f0x00000000000000000000000000000000
}
; Function Attrs: norecurse nounwind readnone
@@ -134,6 +134,6 @@ define fp128 @fabs_quad_const() {
; CHECK-NEXT: ld %s0, 8(, %s2)
; CHECK-NEXT: ld %s1, (, %s2)
; CHECK-NEXT: b.l.t (, %s10)
- %1 = tail call fast fp128 @llvm.fabs.f128(fp128 0xL0000000000000000C000000000000000)
+ %1 = tail call fast fp128 @llvm.fabs.f128(fp128 f0xC0000000000000000000000000000000)
ret fp128 %1
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fcopysign.ll b/llvm/test/CodeGen/VE/Scalar/fcopysign.ll
index 552f6ed59f599a..6d93dc2d7b6607 100644
--- a/llvm/test/CodeGen/VE/Scalar/fcopysign.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fcopysign.ll
@@ -137,7 +137,7 @@ define fp128 @copysign_quad_zero(fp128 %0) {
; CHECK-NEXT: ld %s0, 8(, %s11)
; CHECK-NEXT: adds.l %s11, 32, %s11
; CHECK-NEXT: b.l.t (, %s10)
- %2 = tail call fast fp128 @llvm.copysign.f128(fp128 0xL00000000000000000000000000000000, fp128 %0)
+ %2 = tail call fast fp128 @llvm.copysign.f128(fp128 f0x00000000000000000000000000000000, fp128 %0)
ret fp128 %2
}
@@ -193,6 +193,6 @@ define fp128 @copysign_quad_const(fp128 %0) {
; CHECK-NEXT: ld %s0, 8(, %s11)
; CHECK-NEXT: adds.l %s11, 32, %s11
; CHECK-NEXT: b.l.t (, %s10)
- %2 = tail call fast fp128 @llvm.copysign.f128(fp128 0xL0000000000000000C000000000000000, fp128 %0)
+ %2 = tail call fast fp128 @llvm.copysign.f128(fp128 f0xC0000000000000000000000000000000, fp128 %0)
ret fp128 %2
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fcos.ll b/llvm/test/CodeGen/VE/Scalar/fcos.ll
index b5428679ddd457..2612990323d91c 100644
--- a/llvm/test/CodeGen/VE/Scalar/fcos.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fcos.ll
@@ -109,7 +109,7 @@ define fp128 @fcos_quad_zero() {
; CHECK-NEXT: lea.sl %s12, cosl@hi(, %s2)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %1 = tail call fast fp128 @llvm.cos.f128(fp128 0xL00000000000000000000000000000000)
+ %1 = tail call fast fp128 @llvm.cos.f128(fp128 f0x00000000000000000000000000000000)
ret fp128 %1
}
@@ -146,6 +146,6 @@ define fp128 @fcos_quad_const() {
; CHECK-NEXT: lea.sl %s12, cosl@hi(, %s2)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %1 = tail call fast fp128 @llvm.cos.f128(fp128 0xL0000000000000000C000000000000000)
+ %1 = tail call fast fp128 @llvm.cos.f128(fp128 f0xC0000000000000000000000000000000)
ret fp128 %1
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fma.ll b/llvm/test/CodeGen/VE/Scalar/fma.ll
index 81976ce8580496..516104abe9f444 100644
--- a/llvm/test/CodeGen/VE/Scalar/fma.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fma.ll
@@ -181,7 +181,7 @@ define fp128 @fma_quad_fore_const(fp128 noundef %0, fp128 noundef %1) {
; CHECK-NEXT: lea.sl %s12, fmal@hi(, %s6)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %3 = tail call fast fp128 @llvm.fma.f128(fp128 %0, fp128 0xL0000000000000000C000000000000000, fp128 %1)
+ %3 = tail call fast fp128 @llvm.fma.f128(fp128 %0, fp128 f0xC0000000000000000000000000000000, fp128 %1)
ret fp128 %3
}
@@ -231,6 +231,6 @@ define fp128 @fma_quad_back_const(fp128 noundef %0, fp128 noundef %1) {
; CHECK-NEXT: lea.sl %s12, fmal@hi(, %s6)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %3 = tail call fast fp128 @llvm.fma.f128(fp128 %0, fp128 0xL0000000000000000C000000000000000, fp128 %1)
+ %3 = tail call fast fp128 @llvm.fma.f128(fp128 %0, fp128 f0xC0000000000000000000000000000000, fp128 %1)
ret fp128 %3
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fp_add.ll b/llvm/test/CodeGen/VE/Scalar/fp_add.ll
index fa93a329019240..0d1c0e3944cb74 100644
--- a/llvm/test/CodeGen/VE/Scalar/fp_add.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fp_add.ll
@@ -57,7 +57,7 @@ define fp128 @func6(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fadd.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fadd fp128 %a, 0xL00000000000000004001400000000000
+ %r = fadd fp128 %a, f0x40014000000000000000000000000000
ret fp128 %r
}
@@ -93,7 +93,7 @@ define fp128 @func9(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fadd.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fadd fp128 %a, 0xLFFFFFFFFFFFFFFFF7FFEFFFFFFFFFFFF
+ %r = fadd fp128 %a, f0x7FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF
ret fp128 %r
}
@@ -125,6 +125,6 @@ define fp128 @faddq_imm(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fadd.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fadd fp128 %a, 0xLA0000000000000000000000000000000
+ %r = fadd fp128 %a, f0x0000000000000000A000000000000000
ret fp128 %r
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fp_div.ll b/llvm/test/CodeGen/VE/Scalar/fp_div.ll
index 15f19db1925279..3b171735cbb9bf 100644
--- a/llvm/test/CodeGen/VE/Scalar/fp_div.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fp_div.ll
@@ -63,7 +63,7 @@ define fp128 @func6(fp128 %a) {
; CHECK-NEXT: lea.sl %s12, __divtf3@hi(, %s4)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %r = fdiv fp128 %a, 0xL00000000000000004001400000000000
+ %r = fdiv fp128 %a, f0x40014000000000000000000000000000
ret fp128 %r
}
@@ -103,6 +103,6 @@ define fp128 @func9(fp128 %a) {
; CHECK-NEXT: lea.sl %s12, __divtf3@hi(, %s4)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %r = fdiv fp128 %a, 0xLFFFFFFFFFFFFFFFF7FFEFFFFFFFFFFFF
+ %r = fdiv fp128 %a, f0x7FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF
ret fp128 %r
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fp_frem.ll b/llvm/test/CodeGen/VE/Scalar/fp_frem.ll
index 2b7ce9c395d06b..883472b35ff1a4 100644
--- a/llvm/test/CodeGen/VE/Scalar/fp_frem.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fp_frem.ll
@@ -116,7 +116,7 @@ define fp128 @frem_quad_zero(fp128 %0) {
; CHECK-NEXT: lea.sl %s12, fmodl@hi(, %s4)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %2 = frem fp128 0xL00000000000000000000000000000000, %0
+ %2 = frem fp128 f0x00000000000000000000000000000000, %0
ret fp128 %2
}
@@ -166,6 +166,6 @@ define fp128 @frem_quad_cont(fp128 %0) {
; CHECK-NEXT: lea.sl %s12, fmodl@hi(, %s4)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %2 = frem fp128 0xL0000000000000000C000000000000000, %0
+ %2 = frem fp128 f0xC0000000000000000000000000000000, %0
ret fp128 %2
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fp_mul.ll b/llvm/test/CodeGen/VE/Scalar/fp_mul.ll
index badddd10059046..9f60bfff533a32 100644
--- a/llvm/test/CodeGen/VE/Scalar/fp_mul.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fp_mul.ll
@@ -57,7 +57,7 @@ define fp128 @func6(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fmul.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fmul fp128 %a, 0xL00000000000000004001400000000000
+ %r = fmul fp128 %a, f0x40014000000000000000000000000000
ret fp128 %r
}
@@ -93,7 +93,7 @@ define fp128 @func9(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fmul.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fmul fp128 %a, 0xLFFFFFFFFFFFFFFFF7FFEFFFFFFFFFFFF
+ %r = fmul fp128 %a, f0x7FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF
ret fp128 %r
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fp_sub.ll b/llvm/test/CodeGen/VE/Scalar/fp_sub.ll
index 6aa1a32e9bfa45..c571feea6bae3d 100644
--- a/llvm/test/CodeGen/VE/Scalar/fp_sub.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fp_sub.ll
@@ -57,7 +57,7 @@ define fp128 @func6(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fadd.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fadd fp128 %a, 0xL0000000000000000C001400000000000
+ %r = fadd fp128 %a, f0xC0014000000000000000000000000000
ret fp128 %r
}
@@ -93,7 +93,7 @@ define fp128 @func9(fp128 %a) {
; CHECK-NEXT: ld %s5, (, %s2)
; CHECK-NEXT: fadd.q %s0, %s0, %s4
; CHECK-NEXT: b.l.t (, %s10)
- %r = fadd fp128 %a, 0xLFFFFFFFFFFFFFFFFFFFEFFFFFFFFFFFF
+ %r = fadd fp128 %a, f0xFFFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF
ret fp128 %r
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fsin.ll b/llvm/test/CodeGen/VE/Scalar/fsin.ll
index 995fdaa0cf1ed4..f88e2e07fe1c24 100644
--- a/llvm/test/CodeGen/VE/Scalar/fsin.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fsin.ll
@@ -109,7 +109,7 @@ define fp128 @fsin_quad_zero() {
; CHECK-NEXT: lea.sl %s12, sinl@hi(, %s2)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %1 = tail call fast fp128 @llvm.sin.f128(fp128 0xL00000000000000000000000000000000)
+ %1 = tail call fast fp128 @llvm.sin.f128(fp128 f0x00000000000000000000000000000000)
ret fp128 %1
}
@@ -147,6 +147,6 @@ define fp128 @fsin_quad_const() {
; CHECK-NEXT: lea.sl %s12, sinl@hi(, %s2)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %1 = tail call fast fp128 @llvm.sin.f128(fp128 0xL0000000000000000C000000000000000)
+ %1 = tail call fast fp128 @llvm.sin.f128(fp128 f0xC0000000000000000000000000000000)
ret fp128 %1
}
diff --git a/llvm/test/CodeGen/VE/Scalar/fsqrt.ll b/llvm/test/CodeGen/VE/Scalar/fsqrt.ll
index 3da8cff5f3950f..c9b06078de0c6b 100644
--- a/llvm/test/CodeGen/VE/Scalar/fsqrt.ll
+++ b/llvm/test/CodeGen/VE/Scalar/fsqrt.ll
@@ -110,7 +110,7 @@ define fp128 @fsqrt_quad_zero() {
; CHECK-NEXT: lea.sl %s12, sqrtl@hi(, %s2)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %1 = tail call fast fp128 @llvm.sqrt.f128(fp128 0xL00000000000000000000000000000000)
+ %1 = tail call fast fp128 @llvm.sqrt.f128(fp128 f0x00000000000000000000000000000000)
ret fp128 %1
}
@@ -146,6 +146,6 @@ define fp128 @fsqrt_quad_const() {
; CHECK-NEXT: lea.sl %s12, sqrtl@hi(, %s2)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %1 = tail call fast fp128 @llvm.sqrt.f128(fp128 0xL0000000000000000C000000000000000)
+ %1 = tail call fast fp128 @llvm.sqrt.f128(fp128 f0xC0000000000000000000000000000000)
ret fp128 %1
}
diff --git a/llvm/test/CodeGen/VE/Scalar/load_gv.ll b/llvm/test/CodeGen/VE/Scalar/load_gv.ll
index b4daad4663ddea..f92419d6fd353e 100644
--- a/llvm/test/CodeGen/VE/Scalar/load_gv.ll
+++ b/llvm/test/CodeGen/VE/Scalar/load_gv.ll
@@ -7,7 +7,7 @@
@vi128 = common dso_local local_unnamed_addr global i128 0, align 16
@vf32 = common dso_local local_unnamed_addr global float 0.000000e+00, align 4
@vf64 = common dso_local local_unnamed_addr global double 0.000000e+00, align 8
-@vf128 = common dso_local local_unnamed_addr global fp128 0xL00000000000000000000000000000000, align 16
+@vf128 = common dso_local local_unnamed_addr global fp128 f0x00000000000000000000000000000000, align 16
; Function Attrs: norecurse nounwind readonly
define fp128 @loadf128com() {
diff --git a/llvm/test/CodeGen/VE/Scalar/maxnum.ll b/llvm/test/CodeGen/VE/Scalar/maxnum.ll
index b9a28573bce6fa..19837dc4b3aff5 100644
--- a/llvm/test/CodeGen/VE/Scalar/maxnum.ll
+++ b/llvm/test/CodeGen/VE/Scalar/maxnum.ll
@@ -118,7 +118,7 @@ define fp128 @func_fp_fmax_zero_quad(fp128 noundef %0) {
; CHECK-NEXT: or %s0, 0, %s2
; CHECK-NEXT: or %s1, 0, %s3
; CHECK-NEXT: b.l.t (, %s10)
- %2 = tail call fast fp128 @llvm.maxnum.f128(fp128 %0, fp128 0xL00000000000000000000000000000000)
+ %2 = tail call fast fp128 @llvm.maxnum.f128(fp128 %0, fp128 f0x00000000000000000000000000000000)
ret fp128 %2
}
@@ -157,6 +157,6 @@ define fp128 @func_fp_fmax_const_quad(fp128 noundef %0) {
; CHECK-NEXT: or %s0, 0, %s2
; CHECK-NEXT: or %s1, 0, %s3
; CHECK-NEXT: b.l.t (, %s10)
- %2 = tail call fast fp128 @llvm.maxnum.f128(fp128 %0, fp128 0xL0000000000000000C000000000000000)
+ %2 = tail call fast fp128 @llvm.maxnum.f128(fp128 %0, fp128 f0xC0000000000000000000000000000000)
ret fp128 %2
}
diff --git a/llvm/test/CodeGen/VE/Scalar/minnum.ll b/llvm/test/CodeGen/VE/Scalar/minnum.ll
index 3fb6b089a11032..529f58f63864eb 100644
--- a/llvm/test/CodeGen/VE/Scalar/minnum.ll
+++ b/llvm/test/CodeGen/VE/Scalar/minnum.ll
@@ -118,7 +118,7 @@ define fp128 @func_fp_fmin_zero_quad(fp128 noundef %0) {
; CHECK-NEXT: or %s0, 0, %s2
; CHECK-NEXT: or %s1, 0, %s3
; CHECK-NEXT: b.l.t (, %s10)
- %2 = tail call fast fp128 @llvm.minnum.f128(fp128 %0, fp128 0xL00000000000000000000000000000000)
+ %2 = tail call fast fp128 @llvm.minnum.f128(fp128 %0, fp128 f0x00000000000000000000000000000000)
ret fp128 %2
}
@@ -157,6 +157,6 @@ define fp128 @func_fp_fmin_const_quad(fp128 noundef %0) {
; CHECK-NEXT: or %s0, 0, %s2
; CHECK-NEXT: or %s1, 0, %s3
; CHECK-NEXT: b.l.t (, %s10)
- %2 = tail call fast fp128 @llvm.minnum.f128(fp128 %0, fp128 0xL0000000000000000C000000000000000)
+ %2 = tail call fast fp128 @llvm.minnum.f128(fp128 %0, fp128 f0xC0000000000000000000000000000000)
ret fp128 %2
}
diff --git a/llvm/test/CodeGen/VE/Scalar/pow.ll b/llvm/test/CodeGen/VE/Scalar/pow.ll
index 5b6cab6df71497..7040bc3f6e3fb6 100644
--- a/llvm/test/CodeGen/VE/Scalar/pow.ll
+++ b/llvm/test/CodeGen/VE/Scalar/pow.ll
@@ -100,7 +100,7 @@ define fp128 @func_fp_pow_zero_back_quad(fp128 noundef %0) {
; CHECK-NEXT: ld %s0, 8(, %s2)
; CHECK-NEXT: ld %s1, (, %s2)
; CHECK-NEXT: b.l.t (, %s10)
- ret fp128 0xL00000000000000003FFF000000000000
+ ret fp128 f0x3FFF0000000000000000000000000000
}
; Function Attrs: mustprogress nofree nosync nounwind readnone willreturn
@@ -149,7 +149,7 @@ define fp128 @func_fp_pow_zero_fore_quad(fp128 noundef %0) {
; CHECK-NEXT: lea.sl %s12, powl@hi(, %s4)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %2 = tail call fast fp128 @llvm.pow.f128(fp128 0xL00000000000000000000000000000000, fp128 %0)
+ %2 = tail call fast fp128 @llvm.pow.f128(fp128 f0x00000000000000000000000000000000, fp128 %0)
ret fp128 %2
}
@@ -242,7 +242,7 @@ define fp128 @func_fp_pow_const_fore_quad(fp128 noundef %0) {
; CHECK-NEXT: lea.sl %s12, powl@hi(, %s4)
; CHECK-NEXT: bsic %s10, (, %s12)
; CHECK-NEXT: or %s11, 0, %s9
- %2 = tail call fast fp128 @llvm.pow.f128(fp128 0xL0000000000000000C000000000000000, fp128 %0)
+ %2 = tail call fast fp128 @llvm.pow.f128(fp128 f0xC0000000000000000000000000000000, fp128 %0)
ret fp128 %2
}
diff --git a/llvm/test/CodeGen/VE/Scalar/select.ll b/llvm/test/CodeGen/VE/Scalar/select.ll
index 184513a3f820bb..c2d9ff82d846ae 100644
--- a/llvm/test/CodeGen/VE/Scalar/select.ll
+++ b/llvm/test/CodeGen/VE/Scalar/select.ll
@@ -358,7 +358,7 @@ define fp128 @select_quad_mimm(i1 zeroext %0, fp128 %1) {
; CHECK-NEXT: or %s0, 0, %s2
; CHECK-NEXT: or %s1, 0, %s3
; CHECK-NEXT: b.l.t (, %s10)
- %3 = select fast i1 %0, fp128 0xL0000000000000000C000000000000000, fp128 %1
+ %3 = select fast i1 %0, fp128 f0xC0000000000000000000000000000000, fp128 %1
ret fp128 %3
}
@@ -524,6 +524,6 @@ define fp128 @select_mimm_quad(i1 zeroext %0, fp128 %1) {
; CHECK-NEXT: or %s0, 0, %s4
; CHECK-NEXT: or %s1, 0, %s5
; CHECK-NEXT: b.l.t (, %s10)
- %3 = select fast i1 %0, fp128 %1, fp128 0xL0000000000000000C000000000000000
+ %3 = select fast i1 %0, fp128 %1, fp128 f0xC0000000000000000000000000000000
ret fp128 %3
}
diff --git a/llvm/test/CodeGen/VE/Scalar/store_gv.ll b/llvm/test/CodeGen/VE/Scalar/store_gv.ll
index 6f70b81a4915ef..cff5aaa2349d9d 100644
--- a/llvm/test/CodeGen/VE/Scalar/store_gv.ll
+++ b/llvm/test/CodeGen/VE/Scalar/store_gv.ll
@@ -7,7 +7,7 @@
@vi128 = common dso_local local_unnamed_addr global i128 0, align 16
@vf32 = common dso_local local_unnamed_addr global float 0.000000e+00, align 4
@vf64 = common dso_local local_unnamed_addr global double 0.000000e+00, align 8
-@vf128 = common dso_local local_unnamed_addr global fp128 0xL00000000000000000000000000000000, align 16
+@vf128 = common dso_local local_unnamed_addr global fp128 f0x00000000000000000000000000000000, align 16
; Function Attrs: norecurse nounwind readonly
define void @storef128com(fp128 %0) {
diff --git a/llvm/test/CodeGen/WebAssembly/varargs.ll b/llvm/test/CodeGen/WebAssembly/varargs.ll
index 2944936192b8ba..b0f26bb636ea5e 100644
--- a/llvm/test/CodeGen/WebAssembly/varargs.ll
+++ b/llvm/test/CodeGen/WebAssembly/varargs.ll
@@ -146,7 +146,7 @@ declare void @callee_with_nonlegal_fixed(fp128, ...) nounwind
; CHECK: i32.const $push[[L2:[0-9]+]]=, 0
; CHECK: call callee_with_nonlegal_fixed, $pop[[L0]], $pop[[L1]], $pop[[L2]]{{$}}
define void @call_nonlegal_fixed() nounwind {
- call void (fp128, ...) @callee_with_nonlegal_fixed(fp128 0xL00000000000000000000000000000000)
+ call void (fp128, ...) @callee_with_nonlegal_fixed(fp128 f0x00000000000000000000000000000000)
ret void
}
@@ -197,7 +197,7 @@ define void @nonlegal_fixed(fp128 %x, ...) nounwind {
; UNKNOWN-NEXT: call callee, $1
define void @call_fp128_alignment(ptr %p) {
entry:
- call void (...) @callee(i8 7, fp128 0xL00000000000000018000000000000000)
+ call void (...) @callee(i8 7, fp128 f0x80000000000000000000000000000001)
ret void
}
diff --git a/llvm/test/CodeGen/X86/2008-01-16-FPStackifierAssert.ll b/llvm/test/CodeGen/X86/2008-01-16-FPStackifierAssert.ll
index de07b353e41d9d..c7a9162f44d04d 100644
--- a/llvm/test/CodeGen/X86/2008-01-16-FPStackifierAssert.ll
+++ b/llvm/test/CodeGen/X86/2008-01-16-FPStackifierAssert.ll
@@ -3,8 +3,8 @@
define void @SolveCubic(double %a, double %b, double %c, double %d, ptr %solutions, ptr %x) {
entry:
%tmp71 = load x86_fp80, ptr null, align 16 ; <x86_fp80> [#uses=1]
- %tmp72 = fdiv x86_fp80 %tmp71, 0xKC000C000000000000000 ; <x86_fp80> [#uses=1]
- %tmp73 = fadd x86_fp80 0xK00000000000000000000, %tmp72 ; <x86_fp80> [#uses=1]
+ %tmp72 = fdiv x86_fp80 %tmp71, f0xC000C000000000000000 ; <x86_fp80> [#uses=1]
+ %tmp73 = fadd x86_fp80 f0x00000000000000000000, %tmp72 ; <x86_fp80> [#uses=1]
%tmp7374 = fptrunc x86_fp80 %tmp73 to double ; <double> [#uses=1]
store double %tmp7374, ptr null, align 8
%tmp81 = load double, ptr null, align 8 ; <double> [#uses=1]
@@ -14,7 +14,7 @@ entry:
%tmp85 = fmul double 0.000000e+00, %tmp84 ; <double> [#uses=1]
%tmp8586 = fpext double %tmp85 to x86_fp80 ; <x86_fp80> [#uses=1]
%tmp87 = load x86_fp80, ptr null, align 16 ; <x86_fp80> [#uses=1]
- %tmp88 = fdiv x86_fp80 %tmp87, 0xKC000C000000000000000 ; <x86_fp80> [#uses=1]
+ %tmp88 = fdiv x86_fp80 %tmp87, f0xC000C000000000000000 ; <x86_fp80> [#uses=1]
%tmp89 = fadd x86_fp80 %tmp8586, %tmp88 ; <x86_fp80> [#uses=1]
%tmp8990 = fptrunc x86_fp80 %tmp89 to double ; <double> [#uses=1]
store double %tmp8990, ptr null, align 8
@@ -25,7 +25,7 @@ entry:
%tmp101 = fmul double 0.000000e+00, %tmp100 ; <double> [#uses=1]
%tmp101102 = fpext double %tmp101 to x86_fp80 ; <x86_fp80> [#uses=1]
%tmp103 = load x86_fp80, ptr null, align 16 ; <x86_fp80> [#uses=1]
- %tmp104 = fdiv x86_fp80 %tmp103, 0xKC000C000000000000000 ; <x86_fp80> [#uses=1]
+ %tmp104 = fdiv x86_fp80 %tmp103, f0xC000C000000000000000 ; <x86_fp80> [#uses=1]
%tmp105 = fadd x86_fp80 %tmp101102, %tmp104 ; <x86_fp80> [#uses=1]
%tmp105106 = fptrunc x86_fp80 %tmp105 to double ; <double> [#uses=1]
store double %tmp105106, ptr null, align 8
diff --git a/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-1.ll b/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-1.ll
index 59afc7bd9c9d78..048275533ec34a 100644
--- a/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-1.ll
+++ b/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-1.ll
@@ -21,6 +21,6 @@ define i32 @main() {
; CHECK-NEXT: addl $28, %esp
; CHECK-NEXT: retl
entry_nan.main:
- call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 0xK7FFFC001234000000800)
+ call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 f0x7FFFC001234000000800)
ret i32 0
}
diff --git a/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-2.ll b/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-2.ll
index aa54767fdedee0..09cbaf73c044ff 100644
--- a/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-2.ll
+++ b/llvm/test/CodeGen/X86/2008-10-06-x87ld-nan-2.ll
@@ -7,7 +7,7 @@ target triple = "i686-apple-darwin8"
declare x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 %f)
-@_D3nan4rvale = global x86_fp80 0xK7FFF8001234000000000 ; <ptr> [#uses=1]
+@_D3nan4rvale = global x86_fp80 f0x7FFF8001234000000000 ; <ptr> [#uses=1]
define i32 @main() {
; CHECK-LABEL: main:
@@ -32,7 +32,7 @@ define i32 @main() {
entry_nan.main:
%tmp = load x86_fp80, ptr @_D3nan4rvale ; <x86_fp80> [#uses=1]
call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 %tmp)
- call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 0xK7FFF8001234000000000)
- call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 0xK7FFFC001234000000400)
+ call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 f0x7FFF8001234000000000)
+ call x86_stdcallcc void @_D3nan5printFeZv(x86_fp80 f0x7FFFC001234000000400)
ret i32 0
}
diff --git a/llvm/test/CodeGen/X86/2009-02-12-SpillerBug.ll b/llvm/test/CodeGen/X86/2009-02-12-SpillerBug.ll
index 275c5430f7c61f..3541c4c50845cb 100644
--- a/llvm/test/CodeGen/X86/2009-02-12-SpillerBug.ll
+++ b/llvm/test/CodeGen/X86/2009-02-12-SpillerBug.ll
@@ -4,19 +4,19 @@
define hidden void @__mulxc3(ptr noalias nocapture sret({ x86_fp80, x86_fp80 }) %agg.result, x86_fp80 %a, x86_fp80 %b, x86_fp80 %c, x86_fp80 %d) nounwind {
entry:
%0 = fmul x86_fp80 %b, %d ; <x86_fp80> [#uses=1]
- %1 = fsub x86_fp80 0xK00000000000000000000, %0 ; <x86_fp80> [#uses=1]
- %2 = fadd x86_fp80 0xK00000000000000000000, 0xK00000000000000000000 ; <x86_fp80> [#uses=1]
- %3 = fcmp uno x86_fp80 %1, 0xK00000000000000000000 ; <i1> [#uses=1]
- %4 = fcmp uno x86_fp80 %2, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %1 = fsub x86_fp80 f0x00000000000000000000, %0 ; <x86_fp80> [#uses=1]
+ %2 = fadd x86_fp80 f0x00000000000000000000, f0x00000000000000000000 ; <x86_fp80> [#uses=1]
+ %3 = fcmp uno x86_fp80 %1, f0x00000000000000000000 ; <i1> [#uses=1]
+ %4 = fcmp uno x86_fp80 %2, f0x00000000000000000000 ; <i1> [#uses=1]
%or.cond = and i1 %3, %4 ; <i1> [#uses=1]
br i1 %or.cond, label %bb47, label %bb71
bb47: ; preds = %entry
- %5 = fcmp uno x86_fp80 %a, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %5 = fcmp uno x86_fp80 %a, f0x00000000000000000000 ; <i1> [#uses=1]
br i1 %5, label %bb60, label %bb62
bb60: ; preds = %bb47
- %6 = tail call x86_fp80 @copysignl(x86_fp80 0xK00000000000000000000, x86_fp80 %a) nounwind readnone ; <x86_fp80> [#uses=0]
+ %6 = tail call x86_fp80 @copysignl(x86_fp80 f0x00000000000000000000, x86_fp80 %a) nounwind readnone ; <x86_fp80> [#uses=0]
br label %bb62
bb62: ; preds = %bb60, %bb47
diff --git a/llvm/test/CodeGen/X86/2009-03-03-BitcastLongDouble.ll b/llvm/test/CodeGen/X86/2009-03-03-BitcastLongDouble.ll
index 3dff4f7bfc9f6c..208ae93fabbcbe 100644
--- a/llvm/test/CodeGen/X86/2009-03-03-BitcastLongDouble.ll
+++ b/llvm/test/CodeGen/X86/2009-03-03-BitcastLongDouble.ll
@@ -6,7 +6,7 @@ define i32 @x(i32 %y) nounwind readnone {
entry:
%tmp14 = zext i32 %y to i80 ; <i80> [#uses=1]
%tmp15 = bitcast i80 %tmp14 to x86_fp80 ; <x86_fp80> [#uses=1]
- %add = fadd x86_fp80 %tmp15, 0xK3FFF8000000000000000 ; <x86_fp80> [#uses=1]
+ %add = fadd x86_fp80 %tmp15, f0x3FFF8000000000000000 ; <x86_fp80> [#uses=1]
%tmp11 = bitcast x86_fp80 %add to i80 ; <i80> [#uses=1]
%tmp10 = trunc i80 %tmp11 to i32 ; <i32> [#uses=1]
ret i32 %tmp10
diff --git a/llvm/test/CodeGen/X86/2009-03-09-SpillerBug.ll b/llvm/test/CodeGen/X86/2009-03-09-SpillerBug.ll
index 1b94ecdc48e1f8..d12bbda3c7972a 100644
--- a/llvm/test/CodeGen/X86/2009-03-09-SpillerBug.ll
+++ b/llvm/test/CodeGen/X86/2009-03-09-SpillerBug.ll
@@ -4,9 +4,9 @@
define void @__mulxc3(x86_fp80 %b) nounwind {
entry:
%call = call x86_fp80 @y(ptr null, ptr null) ; <x86_fp80> [#uses=0]
- %cmp = fcmp ord x86_fp80 %b, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %cmp = fcmp ord x86_fp80 %b, f0x00000000000000000000 ; <i1> [#uses=1]
%sub = fsub x86_fp80 %b, %b ; <x86_fp80> [#uses=1]
- %cmp7 = fcmp uno x86_fp80 %sub, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %cmp7 = fcmp uno x86_fp80 %sub, f0x00000000000000000000 ; <i1> [#uses=1]
%and12 = and i1 %cmp7, %cmp ; <i1> [#uses=1]
%and = zext i1 %and12 to i32 ; <i32> [#uses=1]
%conv9 = sitofp i32 %and to x86_fp80 ; <x86_fp80> [#uses=1]
diff --git a/llvm/test/CodeGen/X86/2009-03-12-CPAlignBug.ll b/llvm/test/CodeGen/X86/2009-03-12-CPAlignBug.ll
index 9952d864bb9891..37c755b3fca4a9 100644
--- a/llvm/test/CodeGen/X86/2009-03-12-CPAlignBug.ll
+++ b/llvm/test/CodeGen/X86/2009-03-12-CPAlignBug.ll
@@ -25,11 +25,11 @@ bb1: ; preds = %newFuncRoot
%6 = fdiv x86_fp80 %.reload5, %5 ; <x86_fp80> [#uses=1]
%7 = fadd x86_fp80 %5, %6 ; <x86_fp80> [#uses=1]
%8 = fptrunc x86_fp80 %7 to double ; <double> [#uses=1]
- %9 = fcmp olt x86_fp80 %.reload6, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %9 = fcmp olt x86_fp80 %.reload6, f0x00000000000000000000 ; <i1> [#uses=1]
%iftmp.6.0 = select i1 %9, double 1.000000e+00, double -1.000000e+00 ; <double> [#uses=1]
%10 = fmul double %8, %iftmp.6.0 ; <double> [#uses=1]
%11 = fpext double %10 to x86_fp80 ; <x86_fp80> [#uses=1]
- %12 = fdiv x86_fp80 %.reload, 0xKC000C000000000000000 ; <x86_fp80> [#uses=1]
+ %12 = fdiv x86_fp80 %.reload, f0xC000C000000000000000 ; <x86_fp80> [#uses=1]
%13 = fadd x86_fp80 %11, %12 ; <x86_fp80> [#uses=1]
%14 = fptrunc x86_fp80 %13 to double ; <double> [#uses=1]
store double %14, ptr %x, align 1
diff --git a/llvm/test/CodeGen/X86/2010-05-07-ldconvert.ll b/llvm/test/CodeGen/X86/2010-05-07-ldconvert.ll
index 81e37d57c41a83..d5a5a5a269dd2b 100644
--- a/llvm/test/CodeGen/X86/2010-05-07-ldconvert.ll
+++ b/llvm/test/CodeGen/X86/2010-05-07-ldconvert.ll
@@ -6,7 +6,7 @@ entry:
%retval = alloca i32, align 4 ; <ptr> [#uses=2]
%r = alloca i32, align 4 ; <ptr> [#uses=2]
store i32 0, ptr %retval
- %tmp = call x86_fp80 @llvm.powi.f80.i32(x86_fp80 0xK3FFF8000000000000000, i32 -64) ; <x86_fp80> [#uses=1]
+ %tmp = call x86_fp80 @llvm.powi.f80.i32(x86_fp80 f0x3FFF8000000000000000, i32 -64) ; <x86_fp80> [#uses=1]
%conv = fptosi x86_fp80 %tmp to i32 ; <i32> [#uses=1]
store i32 %conv, ptr %r
%tmp1 = load i32, ptr %r ; <i32> [#uses=1]
diff --git a/llvm/test/CodeGen/X86/2010-05-12-FastAllocKills.ll b/llvm/test/CodeGen/X86/2010-05-12-FastAllocKills.ll
index 1a1f6617d930f1..9dc0f800fc092f 100644
--- a/llvm/test/CodeGen/X86/2010-05-12-FastAllocKills.ll
+++ b/llvm/test/CodeGen/X86/2010-05-12-FastAllocKills.ll
@@ -40,12 +40,12 @@ isdigit339.exit11.preheader: ; preds = %bb2
br i1 undef, label %bb12, label %bb10
bb10: ; preds = %bb10, %isdigit339.exit11.preheader
- %divisor.041 = phi x86_fp80 [ %0, %bb10 ], [ 0xK3FFF8000000000000000, %isdigit339.exit11.preheader ] ; <x86_fp80> [#uses=1]
- %0 = fmul x86_fp80 %divisor.041, 0xK4002A000000000000000 ; <x86_fp80> [#uses=2]
+ %divisor.041 = phi x86_fp80 [ %0, %bb10 ], [ f0x3FFF8000000000000000, %isdigit339.exit11.preheader ] ; <x86_fp80> [#uses=1]
+ %0 = fmul x86_fp80 %divisor.041, f0x4002A000000000000000 ; <x86_fp80> [#uses=2]
br i1 false, label %bb12, label %bb10
bb12: ; preds = %bb10, %isdigit339.exit11.preheader
- %divisor.0.lcssa = phi x86_fp80 [ 0xK3FFF8000000000000000, %isdigit339.exit11.preheader ], [ %0, %bb10 ] ; <x86_fp80> [#uses=0]
+ %divisor.0.lcssa = phi x86_fp80 [ f0x3FFF8000000000000000, %isdigit339.exit11.preheader ], [ %0, %bb10 ] ; <x86_fp80> [#uses=0]
br label %bb13
bb13: ; preds = %bb12, %bb2
diff --git a/llvm/test/CodeGen/X86/GlobalISel/regbankselect-x87.ll b/llvm/test/CodeGen/X86/GlobalISel/regbankselect-x87.ll
index 99d458a183a9bd..8956d8209fa01a 100644
--- a/llvm/test/CodeGen/X86/GlobalISel/regbankselect-x87.ll
+++ b/llvm/test/CodeGen/X86/GlobalISel/regbankselect-x87.ll
@@ -7,7 +7,7 @@ define x86_fp80 @f0(x86_fp80 noundef %a) {
; X86: bb.1.entry:
; X86-NEXT: [[FRAME_INDEX:%[0-9]+]]:gpr(p0) = G_FRAME_INDEX %fixed-stack.0
; X86-NEXT: [[LOAD:%[0-9]+]]:psr(s80) = G_LOAD [[FRAME_INDEX]](p0) :: (invariant load (s80) from %fixed-stack.0, align 4)
- ; X86-NEXT: [[C:%[0-9]+]]:psr(s80) = G_FCONSTANT x86_fp80 0xK400A8000000000000000
+ ; X86-NEXT: [[C:%[0-9]+]]:psr(s80) = G_FCONSTANT x86_fp80 f0x400A8000000000000000
; X86-NEXT: [[FRAME_INDEX1:%[0-9]+]]:gpr(p0) = G_FRAME_INDEX %stack.0.a.addr
; X86-NEXT: [[FRAME_INDEX2:%[0-9]+]]:gpr(p0) = G_FRAME_INDEX %stack.1.x
; X86-NEXT: G_STORE [[LOAD]](s80), [[FRAME_INDEX1]](p0) :: (store (s80) into %ir.a.addr, align 16)
@@ -22,7 +22,7 @@ define x86_fp80 @f0(x86_fp80 noundef %a) {
; X64: bb.1.entry:
; X64-NEXT: [[FRAME_INDEX:%[0-9]+]]:gpr(p0) = G_FRAME_INDEX %fixed-stack.0
; X64-NEXT: [[LOAD:%[0-9]+]]:psr(s80) = G_LOAD [[FRAME_INDEX]](p0) :: (invariant load (s80) from %fixed-stack.0, align 16)
- ; X64-NEXT: [[C:%[0-9]+]]:psr(s80) = G_FCONSTANT x86_fp80 0xK400A8000000000000000
+ ; X64-NEXT: [[C:%[0-9]+]]:psr(s80) = G_FCONSTANT x86_fp80 f0x400A8000000000000000
; X64-NEXT: [[FRAME_INDEX1:%[0-9]+]]:gpr(p0) = G_FRAME_INDEX %stack.0.a.addr
; X64-NEXT: [[FRAME_INDEX2:%[0-9]+]]:gpr(p0) = G_FRAME_INDEX %stack.1.x
; X64-NEXT: G_STORE [[LOAD]](s80), [[FRAME_INDEX1]](p0) :: (store (s80) into %ir.a.addr, align 16)
@@ -36,7 +36,7 @@ entry:
%a.addr = alloca x86_fp80, align 16
%x = alloca x86_fp80, align 16
store x86_fp80 %a, ptr %a.addr, align 16
- store x86_fp80 0xK400A8000000000000000, ptr %x, align 16
+ store x86_fp80 f0x400A8000000000000000, ptr %x, align 16
%load1 = load x86_fp80, ptr %a.addr, align 16
%load2 = load x86_fp80, ptr %x, align 16
%add = fadd x86_fp80 %load1, %load2
diff --git a/llvm/test/CodeGen/X86/atomic-nocx16.ll b/llvm/test/CodeGen/X86/atomic-nocx16.ll
index c854a21d30bc95..a1521be136bc0d 100644
--- a/llvm/test/CodeGen/X86/atomic-nocx16.ll
+++ b/llvm/test/CodeGen/X86/atomic-nocx16.ll
@@ -48,13 +48,13 @@ define void @test_fp(ptr %a) nounwind {
entry:
; CHECK: __atomic_exchange_16
; CHECK32: __atomic_exchange
- %0 = atomicrmw xchg ptr %a, fp128 0xL00000000000000004000900000000000 seq_cst
+ %0 = atomicrmw xchg ptr %a, fp128 f0x40009000000000000000000000000000 seq_cst
; CHECK: __atomic_compare_exchange_16
; CHECK32: __atomic_compare_exchange
- %1 = atomicrmw fadd ptr %a, fp128 0xL00000000000000004000900000000000 seq_cst
+ %1 = atomicrmw fadd ptr %a, fp128 f0x40009000000000000000000000000000 seq_cst
; CHECK: __atomic_compare_exchange_16
; CHECK32: __atomic_compare_exchange
- %2 = atomicrmw fsub ptr %a, fp128 0xL00000000000000004000900000000000 seq_cst
+ %2 = atomicrmw fsub ptr %a, fp128 f0x40009000000000000000000000000000 seq_cst
; CHECK: __atomic_load_16
; CHECK32: __atomic_load
%3 = load atomic fp128, ptr %a seq_cst, align 16
diff --git a/llvm/test/CodeGen/X86/avx10_2-cmp.ll b/llvm/test/CodeGen/X86/avx10_2-cmp.ll
index 140a20c17ea6de..adb9f5e6037d2e 100644
--- a/llvm/test/CodeGen/X86/avx10_2-cmp.ll
+++ b/llvm/test/CodeGen/X86/avx10_2-cmp.ll
@@ -265,9 +265,9 @@ define i32 @PR118606(x86_fp80 %val1) #0 {
; X86-NEXT: xorl %eax, %eax
; X86-NEXT: retl
entry:
- %cmp8 = fcmp oeq x86_fp80 %val1, 0xK00000000000000000000
- %0 = select i1 %cmp8, x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK00000000000000000000
- %cmp64 = fcmp ogt x86_fp80 %0, 0xK00000000000000000000
+ %cmp8 = fcmp oeq x86_fp80 %val1, f0x00000000000000000000
+ %0 = select i1 %cmp8, x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x00000000000000000000
+ %cmp64 = fcmp ogt x86_fp80 %0, f0x00000000000000000000
br i1 %cmp64, label %if.then66, label %if.end70
if.then66: ; preds = %entry
diff --git a/llvm/test/CodeGen/X86/avx512-insert-extract.ll b/llvm/test/CodeGen/X86/avx512-insert-extract.ll
index 7ce37c637a79ca..bdf243ac440a14 100644
--- a/llvm/test/CodeGen/X86/avx512-insert-extract.ll
+++ b/llvm/test/CodeGen/X86/avx512-insert-extract.ll
@@ -2206,7 +2206,7 @@ define void @test_concat_v2i1(ptr %arg, ptr %arg1, ptr %arg2) nounwind {
; SKX-NEXT: vzeroupper
; SKX-NEXT: retq
%tmp = load <2 x half>, ptr %arg, align 8
- %tmp3 = fcmp fast olt <2 x half> %tmp, <half 0xH4600, half 0xH4600>
+ %tmp3 = fcmp fast olt <2 x half> %tmp, <half f0x4600, half f0x4600>
%tmp4 = fcmp fast ogt <2 x half> %tmp, zeroinitializer
%tmp5 = and <2 x i1> %tmp3, %tmp4
%tmp6 = load <2 x half>, ptr %arg1, align 8
diff --git a/llvm/test/CodeGen/X86/avx512fp16-combine-shuffle-fma.ll b/llvm/test/CodeGen/X86/avx512fp16-combine-shuffle-fma.ll
index 54ccc23840f99c..f16e133b130c9a 100644
--- a/llvm/test/CodeGen/X86/avx512fp16-combine-shuffle-fma.ll
+++ b/llvm/test/CodeGen/X86/avx512fp16-combine-shuffle-fma.ll
@@ -47,7 +47,7 @@ define <2 x half> @foo(<2 x half> %0) "unsafe-fp-math"="true" nounwind {
; FP16-NEXT: vfmaddsub231ph {{\.?LCPI[0-9]+_[0-9]+}}(%rip), %xmm1, %xmm0
; FP16-NEXT: retq
%2 = shufflevector <2 x half> %0, <2 x half> undef, <2 x i32> <i32 1, i32 2>
- %3 = fmul fast <2 x half> %2, <half 0xH3D3A, half 0xH3854>
+ %3 = fmul fast <2 x half> %2, <half f0x3D3A, half f0x3854>
%4 = fsub fast <2 x half> %3, %0
%5 = fadd fast <2 x half> %3, %0
%6 = shufflevector <2 x half> %4, <2 x half> %5, <2 x i32> <i32 0, i32 3>
diff --git a/llvm/test/CodeGen/X86/avx512fp16-mov.ll b/llvm/test/CodeGen/X86/avx512fp16-mov.ll
index f4eb5b952ae436..0f13e212f29070 100644
--- a/llvm/test/CodeGen/X86/avx512fp16-mov.ll
+++ b/llvm/test/CodeGen/X86/avx512fp16-mov.ll
@@ -2015,7 +2015,7 @@ define <8 x half> @test21(half %a, half %b, half %c) nounwind {
; X86-NEXT: vpbroadcastw %xmm1, %xmm1
; X86-NEXT: vshufps {{.*#+}} xmm0 = xmm0[0,1],xmm1[0,0]
; X86-NEXT: retl
- %1 = insertelement <8 x half> <half poison, half poison, half poison, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000, half 0xH0000>, half %a, i32 0
+ %1 = insertelement <8 x half> <half poison, half poison, half poison, half f0x0000, half f0x0000, half f0x0000, half f0x0000, half f0x0000>, half %a, i32 0
%2 = insertelement <8 x half> %1, half %b, i32 1
%3 = insertelement <8 x half> %2, half %c, i32 2
ret <8 x half> %3
diff --git a/llvm/test/CodeGen/X86/bfloat-constrained.ll b/llvm/test/CodeGen/X86/bfloat-constrained.ll
index 081b1cebfc43d6..194cfa1508b590 100644
--- a/llvm/test/CodeGen/X86/bfloat-constrained.ll
+++ b/llvm/test/CodeGen/X86/bfloat-constrained.ll
@@ -3,9 +3,9 @@
; RUN: llc < %s -mtriple=x86_64-linux-gnu -mattr=+avx2 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-linux-gnu -mattr=+avx512bf16,+avx512vl | FileCheck %s --check-prefixes=X64
-@a = global bfloat 0xR0000, align 2
-@b = global bfloat 0xR0000, align 2
-@c = global bfloat 0xR0000, align 2
+@a = global bfloat f0x0000, align 2
+@b = global bfloat f0x0000, align 2
+@c = global bfloat f0x0000, align 2
define float @bfloat_to_float() strictfp {
; X86-LABEL: bfloat_to_float:
diff --git a/llvm/test/CodeGen/X86/bfloat.ll b/llvm/test/CodeGen/X86/bfloat.ll
index a6b3e3fd1fd169..f1030409ab67cb 100644
--- a/llvm/test/CodeGen/X86/bfloat.ll
+++ b/llvm/test/CodeGen/X86/bfloat.ll
@@ -1012,7 +1012,7 @@ define <32 x bfloat> @pr63017_2() nounwind {
; AVXNC-NEXT: .LBB12_2:
; AVXNC-NEXT: vmovaps %ymm0, %ymm1
; AVXNC-NEXT: retq
- %1 = call <32 x bfloat> @llvm.masked.load.v32bf16.p0(ptr poison, i32 2, <32 x i1> poison, <32 x bfloat> <bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80, bfloat 0xRBF80>)
+ %1 = call <32 x bfloat> @llvm.masked.load.v32bf16.p0(ptr poison, i32 2, <32 x i1> poison, <32 x bfloat> <bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80, bfloat f0xBF80>)
ret <32 x bfloat> %1
}
diff --git a/llvm/test/CodeGen/X86/build_fp16_constant_vector.ll b/llvm/test/CodeGen/X86/build_fp16_constant_vector.ll
index 6cb449822145af..086563d003e43b 100644
--- a/llvm/test/CodeGen/X86/build_fp16_constant_vector.ll
+++ b/llvm/test/CodeGen/X86/build_fp16_constant_vector.ll
@@ -12,8 +12,8 @@ define dso_local <32 x half> @foo(<32 x half> %a, <32 x half> %b, <32 x half> %c
; CHECK-NEXT: vaddph %zmm1, %zmm0, %zmm0
; CHECK-NEXT: ret{{[l|q]}}
entry:
- %0 = tail call fast <32 x half> @llvm.fma.v32f16(<32 x half> %a, <32 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00>, <32 x half> %c)
- %1 = tail call fast <32 x half> @llvm.fma.v32f16(<32 x half> %b, <32 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00, half 0xHBC00>, <32 x half> %c)
+ %0 = tail call fast <32 x half> @llvm.fma.v32f16(<32 x half> %a, <32 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00>, <32 x half> %c)
+ %1 = tail call fast <32 x half> @llvm.fma.v32f16(<32 x half> %b, <32 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00, half f0xBC00>, <32 x half> %c)
%2 = fadd <32 x half> %0, %1
ret <32 x half> %2
}
diff --git a/llvm/test/CodeGen/X86/byval6.ll b/llvm/test/CodeGen/X86/byval6.ll
index e4ea5d9ab6cffa..987c489e59b830 100644
--- a/llvm/test/CodeGen/X86/byval6.ll
+++ b/llvm/test/CodeGen/X86/byval6.ll
@@ -2,8 +2,8 @@
; RUN: llc < %s -mcpu=generic -mtriple=i686-- | FileCheck %s
%struct.W = type { x86_fp80, x86_fp80 }
-@B = global %struct.W { x86_fp80 0xK4001A000000000000000, x86_fp80 0xK4001C000000000000000 }, align 32
-@.cpx = internal constant %struct.W { x86_fp80 0xK4001E000000000000000, x86_fp80 0xK40028000000000000000 }
+@B = global %struct.W { x86_fp80 f0x4001A000000000000000, x86_fp80 f0x4001C000000000000000 }, align 32
+@.cpx = internal constant %struct.W { x86_fp80 f0x4001E000000000000000, x86_fp80 f0x40028000000000000000 }
define i32 @main() nounwind {
; CHECK-LABEL: main:
diff --git a/llvm/test/CodeGen/X86/cmov-fp.ll b/llvm/test/CodeGen/X86/cmov-fp.ll
index 77665d083b7e3e..6a6a350b55d0aa 100644
--- a/llvm/test/CodeGen/X86/cmov-fp.ll
+++ b/llvm/test/CodeGen/X86/cmov-fp.ll
@@ -995,7 +995,7 @@ define x86_fp80 @test17(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp ugt i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1037,7 +1037,7 @@ define x86_fp80 @test18(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp uge i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1079,7 +1079,7 @@ define x86_fp80 @test19(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp ult i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1121,7 +1121,7 @@ define x86_fp80 @test20(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp ule i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1168,7 +1168,7 @@ define x86_fp80 @test21(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: retl
; We don't emit a branch for fp80, why?
%cmp = icmp sgt i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1214,7 +1214,7 @@ define x86_fp80 @test22(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp sge i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1260,7 +1260,7 @@ define x86_fp80 @test23(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp slt i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
@@ -1306,6 +1306,6 @@ define x86_fp80 @test24(i32 %a, i32 %b, x86_fp80 %x) nounwind {
; NOCMOV-NEXT: fstp %st(1)
; NOCMOV-NEXT: retl
%cmp = icmp sle i32 %a, %b
- %sel = select i1 %cmp, x86_fp80 0xK4005C600000000000000, x86_fp80 %x
+ %sel = select i1 %cmp, x86_fp80 f0x4005C600000000000000, x86_fp80 %x
ret x86_fp80 %sel
}
diff --git a/llvm/test/CodeGen/X86/coff-fp-section-name.ll b/llvm/test/CodeGen/X86/coff-fp-section-name.ll
index 4dc45cf2958f04..8c32174853a082 100644
--- a/llvm/test/CodeGen/X86/coff-fp-section-name.ll
+++ b/llvm/test/CodeGen/X86/coff-fp-section-name.ll
@@ -23,13 +23,13 @@ entry:
%o = alloca double, align 8
store i32 0, ptr %retval, align 4
- store fp128 0xLBB2C11D0AE2E087D73E717A35985531C, ptr %a, align 16
- store fp128 0xLBB2C11D0AE2E087D73E717A35985531C, ptr %b, align 16
- store fp128 0xL00000000000000004002000000000000, ptr %c, align 16
- store fp128 0xL00000000000000007FFF800000000000, ptr %d, align 16
- store fp128 0xL00000000000000007FFF000000000000, ptr %e, align 16
- store fp128 0xL00000000000000007FFF000000000000, ptr %f, align 16
- store fp128 0xL10000000000000003F66244CE242C556, ptr %g, align 16
+ store fp128 f0x73E717A35985531CBB2C11D0AE2E087D, ptr %a, align 16
+ store fp128 f0x73E717A35985531CBB2C11D0AE2E087D, ptr %b, align 16
+ store fp128 f0x40020000000000000000000000000000, ptr %c, align 16
+ store fp128 f0x7FFF8000000000000000000000000000, ptr %d, align 16
+ store fp128 f0x7FFF0000000000000000000000000000, ptr %e, align 16
+ store fp128 f0x7FFF0000000000000000000000000000, ptr %f, align 16
+ store fp128 f0x3F66244CE242C5561000000000000000, ptr %g, align 16
store float 0x3E212E0BE0000000, ptr %h, align 4
store float 8.000000e+00, ptr %i, align 4
store float 0x7FF8000000000000, ptr %j, align 4
diff --git a/llvm/test/CodeGen/X86/complex-fca.ll b/llvm/test/CodeGen/X86/complex-fca.ll
index 6b2ba462fa83e8..54ef2c62268e79 100644
--- a/llvm/test/CodeGen/X86/complex-fca.ll
+++ b/llvm/test/CodeGen/X86/complex-fca.ll
@@ -4,7 +4,7 @@ define void @ccosl(ptr noalias sret({ x86_fp80, x86_fp80 }) %agg.result, { x86_f
entry:
%z8 = extractvalue { x86_fp80, x86_fp80 } %z, 0
%z9 = extractvalue { x86_fp80, x86_fp80 } %z, 1
- %0 = fsub x86_fp80 0xK80000000000000000000, %z9
+ %0 = fsub x86_fp80 f0x80000000000000000000, %z9
%insert = insertvalue { x86_fp80, x86_fp80 } undef, x86_fp80 %0, 0
%insert7 = insertvalue { x86_fp80, x86_fp80 } %insert, x86_fp80 %z8, 1
call void @ccoshl(ptr noalias sret({ x86_fp80, x86_fp80 }) %agg.result, { x86_fp80, x86_fp80 } %insert7) nounwind
diff --git a/llvm/test/CodeGen/X86/fake-use-hpfloat.ll b/llvm/test/CodeGen/X86/fake-use-hpfloat.ll
index fd511a6179acfe..b0ee4716dbb60a 100644
--- a/llvm/test/CodeGen/X86/fake-use-hpfloat.ll
+++ b/llvm/test/CodeGen/X86/fake-use-hpfloat.ll
@@ -10,6 +10,6 @@ target triple = "x86_64-unknown-unknown"
define void @_Z6doTestv() local_unnamed_addr optdebug {
entry:
- tail call void (...) @llvm.fake.use(half 0xH0000)
+ tail call void (...) @llvm.fake.use(half f0x0000)
ret void
}
diff --git a/llvm/test/CodeGen/X86/float-asmprint.ll b/llvm/test/CodeGen/X86/float-asmprint.ll
index 879bcf39e59253..9dd3313de7cd45 100644
--- a/llvm/test/CodeGen/X86/float-asmprint.ll
+++ b/llvm/test/CodeGen/X86/float-asmprint.ll
@@ -3,9 +3,9 @@
; Check that all current floating-point types are correctly emitted to assembly
; on a little-endian target.
-@var128 = global fp128 0xL00000000000000008000000000000000, align 16
-@varppc128 = global ppc_fp128 0xM80000000000000000000000000000000, align 16
-@var80 = global x86_fp80 0xK80000000000000000000, align 16
+@var128 = global fp128 f0x80000000000000000000000000000000, align 16
+@varppc128 = global ppc_fp128 f0x00000000000000008000000000000000, align 16
+@var80 = global x86_fp80 f0x80000000000000000000, align 16
@var64 = global double -0.0, align 8
@var32 = global float -0.0, align 4
@var16 = global half -0.0, align 2
diff --git a/llvm/test/CodeGen/X86/fold-int-pow2-with-fmul-or-fdiv.ll b/llvm/test/CodeGen/X86/fold-int-pow2-with-fmul-or-fdiv.ll
index 2163121410553f..3b1e536773a6b2 100644
--- a/llvm/test/CodeGen/X86/fold-int-pow2-with-fmul-or-fdiv.ll
+++ b/llvm/test/CodeGen/X86/fold-int-pow2-with-fmul-or-fdiv.ll
@@ -400,7 +400,7 @@ define <8 x half> @fmul_pow2_8xhalf(<8 x i16> %i) {
; CHECK-FMA-NEXT: retq
%p2 = shl <8 x i16> <i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1>, %i
%p2_f = uitofp <8 x i16> %p2 to <8 x half>
- %r = fmul <8 x half> <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>, %p2_f
+ %r = fmul <8 x half> <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>, %p2_f
ret <8 x half> %r
}
@@ -651,7 +651,7 @@ define <8 x half> @fmul_pow2_ldexp_8xhalf(<8 x i16> %i) {
; CHECK-AVX512F-NEXT: addq $72, %rsp
; CHECK-AVX512F-NEXT: .cfi_def_cfa_offset 8
; CHECK-AVX512F-NEXT: retq
- %r = call <8 x half> @llvm.ldexp.v8f16.v8i16(<8 x half> <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>, <8 x i16> %i)
+ %r = call <8 x half> @llvm.ldexp.v8f16.v8i16(<8 x half> <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>, <8 x i16> %i)
ret <8 x half> %r
}
@@ -672,7 +672,7 @@ define <8 x half> @fdiv_pow2_8xhalf(<8 x i16> %i) {
; CHECK-AVX-NEXT: retq
%p2 = shl <8 x i16> <i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1>, %i
%p2_f = uitofp <8 x i16> %p2 to <8 x half>
- %r = fdiv <8 x half> <half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000, half 0xH7000>, %p2_f
+ %r = fdiv <8 x half> <half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000, half f0x7000>, %p2_f
ret <8 x half> %r
}
@@ -1448,7 +1448,7 @@ define half @fdiv_pow_shl_cnt_fail_out_of_bounds(i32 %cnt) nounwind {
; CHECK-FMA-NEXT: retq
%shl = shl nuw i32 1, %cnt
%conv = uitofp i32 %shl to half
- %mul = fdiv half 0xH7000, %conv
+ %mul = fdiv half f0x7000, %conv
ret half %mul
}
@@ -1470,7 +1470,7 @@ define half @fdiv_pow_shl_cnt_in_bounds(i16 %cnt) nounwind {
; CHECK-AVX-NEXT: retq
%shl = shl nuw i16 1, %cnt
%conv = uitofp i16 %shl to half
- %mul = fdiv half 0xH7000, %conv
+ %mul = fdiv half f0x7000, %conv
ret half %mul
}
@@ -1492,7 +1492,7 @@ define half @fdiv_pow_shl_cnt_in_bounds2(i16 %cnt) nounwind {
; CHECK-AVX-NEXT: retq
%shl = shl nuw i16 1, %cnt
%conv = uitofp i16 %shl to half
- %mul = fdiv half 0xH4800, %conv
+ %mul = fdiv half f0x4800, %conv
ret half %mul
}
@@ -1565,7 +1565,7 @@ define half @fdiv_pow_shl_cnt_fail_out_of_bound2(i16 %cnt) nounwind {
; CHECK-FMA-NEXT: retq
%shl = shl nuw i16 1, %cnt
%conv = uitofp i16 %shl to half
- %mul = fdiv half 0xH4000, %conv
+ %mul = fdiv half f0x4000, %conv
ret half %mul
}
diff --git a/llvm/test/CodeGen/X86/fp-stack-O0.ll b/llvm/test/CodeGen/X86/fp-stack-O0.ll
index d7b776838feebf..725cae0e389449 100644
--- a/llvm/test/CodeGen/X86/fp-stack-O0.ll
+++ b/llvm/test/CodeGen/X86/fp-stack-O0.ll
@@ -18,7 +18,7 @@ declare i32 @x2(x86_fp80, x86_fp80) nounwind
define i32 @test1() nounwind uwtable ssp {
entry:
%call = call x86_fp80 (...) @x1(i32 -1)
- %call1 = call i32 @x2(x86_fp80 %call, x86_fp80 0xK401EFFFFFFFF00000000)
+ %call1 = call i32 @x2(x86_fp80 %call, x86_fp80 f0x401EFFFFFFFF00000000)
ret i32 %call1
}
diff --git a/llvm/test/CodeGen/X86/fp128-calling-conv.ll b/llvm/test/CodeGen/X86/fp128-calling-conv.ll
index 8dc99a2431dbba..c48a3c1622f281 100644
--- a/llvm/test/CodeGen/X86/fp128-calling-conv.ll
+++ b/llvm/test/CodeGen/X86/fp128-calling-conv.ll
@@ -3,7 +3,7 @@
; RUN: llc < %s -O2 -mtriple=x86_64-linux-gnu -mattr=+mmx | FileCheck %s
; __float128 myFP128 = 1.0L; // x86_64-linux-android
-@myFP128 = global fp128 0xL00000000000000003FFF000000000000, align 16
+@myFP128 = global fp128 f0x3FFF0000000000000000000000000000, align 16
; The first few parameters are passed in registers and the other are on stack.
diff --git a/llvm/test/CodeGen/X86/fp128-cast-strict.ll b/llvm/test/CodeGen/X86/fp128-cast-strict.ll
index f141153d059acb..abe0f1e705e46a 100644
--- a/llvm/test/CodeGen/X86/fp128-cast-strict.ll
+++ b/llvm/test/CodeGen/X86/fp128-cast-strict.ll
@@ -12,8 +12,8 @@
@vf16 = common dso_local global half 0.000000e+00, align 2
@vf32 = common dso_local global float 0.000000e+00, align 4
@vf64 = common dso_local global double 0.000000e+00, align 8
-@vf80 = common dso_local global x86_fp80 0xK00000000000000000000, align 8
-@vf128 = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
+@vf80 = common dso_local global x86_fp80 f0x00000000000000000000, align 8
+@vf128 = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
define dso_local void @TestFPExtF16_F128() nounwind strictfp {
; X64-SSE-LABEL: TestFPExtF16_F128:
diff --git a/llvm/test/CodeGen/X86/fp128-cast.ll b/llvm/test/CodeGen/X86/fp128-cast.ll
index 1de2484d47ba1b..79f4c6792342db 100644
--- a/llvm/test/CodeGen/X86/fp128-cast.ll
+++ b/llvm/test/CodeGen/X86/fp128-cast.ll
@@ -18,8 +18,8 @@
@vu128 = common dso_local global i128 0, align 16
@vf32 = common dso_local global float 0.000000e+00, align 4
@vf64 = common dso_local global double 0.000000e+00, align 8
-@vf80 = common dso_local global x86_fp80 0xK00000000000000000000, align 8
-@vf128 = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
+@vf80 = common dso_local global x86_fp80 f0x00000000000000000000, align 8
+@vf128 = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
define dso_local void @TestFPExtF32_F128() nounwind {
; X64-SSE-LABEL: TestFPExtF32_F128:
@@ -1046,7 +1046,7 @@ define dso_local i32 @TestConst128(fp128 %v) nounwind {
; X64-AVX-NEXT: popq %rcx
; X64-AVX-NEXT: retq
entry:
- %cmp = fcmp ogt fp128 %v, 0xL00000000000000003FFF000000000000
+ %cmp = fcmp ogt fp128 %v, f0x3FFF0000000000000000000000000000
%conv = zext i1 %cmp to i32
ret i32 %conv
}
@@ -1097,7 +1097,7 @@ define dso_local i32 @TestConst128Zero(fp128 %v) nounwind {
; X64-AVX-NEXT: popq %rcx
; X64-AVX-NEXT: retq
entry:
- %cmp = fcmp ogt fp128 %v, 0xL00000000000000000000000000000000
+ %cmp = fcmp ogt fp128 %v, f0x00000000000000000000000000000000
%conv = zext i1 %cmp to i32
ret i32 %conv
}
@@ -1376,7 +1376,7 @@ define i1 @PR34866(i128 %x) nounwind {
; X64-AVX-NEXT: orq %rsi, %rdi
; X64-AVX-NEXT: sete %al
; X64-AVX-NEXT: retq
- %bc_mmx = bitcast fp128 0xL00000000000000000000000000000000 to i128
+ %bc_mmx = bitcast fp128 f0x00000000000000000000000000000000 to i128
%cmp = icmp eq i128 %bc_mmx, %x
ret i1 %cmp
}
@@ -1411,7 +1411,7 @@ define i1 @PR34866_commute(i128 %x) nounwind {
; X64-AVX-NEXT: orq %rsi, %rdi
; X64-AVX-NEXT: sete %al
; X64-AVX-NEXT: retq
- %bc_mmx = bitcast fp128 0xL00000000000000000000000000000000 to i128
+ %bc_mmx = bitcast fp128 f0x00000000000000000000000000000000 to i128
%cmp = icmp eq i128 %x, %bc_mmx
ret i1 %cmp
}
diff --git a/llvm/test/CodeGen/X86/fp128-i128.ll b/llvm/test/CodeGen/X86/fp128-i128.ll
index f176a299c4e9be..8499c77bb20bc3 100644
--- a/llvm/test/CodeGen/X86/fp128-i128.ll
+++ b/llvm/test/CodeGen/X86/fp128-i128.ll
@@ -159,8 +159,8 @@ entry:
%0 = bitcast fp128 %x to i128
%bf.clear = and i128 %0, 170141183460469231731687303715884105727
%1 = bitcast i128 %bf.clear to fp128
- %cmp = fcmp olt fp128 %1, 0xL999999999999999A3FFB999999999999
- %cond = select i1 %cmp, fp128 0xL00000000000000003FFF000000000000, fp128 0xL00000000000000004000000000000000
+ %cmp = fcmp olt fp128 %1, f0x3FFB999999999999999999999999999A
+ %cond = select i1 %cmp, fp128 f0x3FFF0000000000000000000000000000, fp128 f0x40000000000000000000000000000000
ret fp128 %cond
}
@@ -272,7 +272,7 @@ entry:
br i1 %cmp, label %if.then, label %if.end
if.then: ; preds = %entry
- %mul = fmul fp128 %x, 0xL00000000000000004201000000000000
+ %mul = fmul fp128 %x, f0x42010000000000000000000000000000
%1 = bitcast fp128 %mul to i128
%bf.clear4 = and i128 %1, -170135991163610696904058773219554885633
%bf.set = or i128 %bf.clear4, 85060207136517546210586590865283612672
diff --git a/llvm/test/CodeGen/X86/fp128-libcalls.ll b/llvm/test/CodeGen/X86/fp128-libcalls.ll
index bb75ec10851197..9fc25c8c343e04 100644
--- a/llvm/test/CodeGen/X86/fp128-libcalls.ll
+++ b/llvm/test/CodeGen/X86/fp128-libcalls.ll
@@ -9,7 +9,7 @@
; Check all soft floating point library function calls.
@vf64 = common dso_local global double 0.000000e+00, align 8
-@vf128 = common dso_local global fp128 0xL00000000000000000000000000000000, align 16
+@vf128 = common dso_local global fp128 f0x00000000000000000000000000000000, align 16
define dso_local void @Test128Add(fp128 %d1, fp128 %d2) nounwind {
; CHECK-LABEL: Test128Add:
diff --git a/llvm/test/CodeGen/X86/fp128-load.ll b/llvm/test/CodeGen/X86/fp128-load.ll
index 7b106c0d5e7280..6b38de5271fea9 100644
--- a/llvm/test/CodeGen/X86/fp128-load.ll
+++ b/llvm/test/CodeGen/X86/fp128-load.ll
@@ -5,7 +5,7 @@
; RUN: -enable-legalize-types-checking | FileCheck %s
; __float128 myFP128 = 1.0L; // x86_64-linux-android
-@my_fp128 = dso_local global fp128 0xL00000000000000003FFF000000000000, align 16
+@my_fp128 = dso_local global fp128 f0x3FFF0000000000000000000000000000, align 16
define fp128 @get_fp128() {
; CHECK-LABEL: get_fp128:
diff --git a/llvm/test/CodeGen/X86/fp128-select.ll b/llvm/test/CodeGen/X86/fp128-select.ll
index 0486c1c4d28e95..5d7f6a4d61b1f3 100644
--- a/llvm/test/CodeGen/X86/fp128-select.ll
+++ b/llvm/test/CodeGen/X86/fp128-select.ll
@@ -33,7 +33,7 @@ define void @test_select(ptr %p, ptr %q, i1 zeroext %c) {
; NOSSE-NEXT: movq %rax, (%rsi)
; NOSSE-NEXT: retq
%a = load fp128, ptr %p, align 2
- %r = select i1 %c, fp128 %a, fp128 0xL00000000000000007FFF800000000000
+ %r = select i1 %c, fp128 %a, fp128 f0x7FFF8000000000000000000000000000
store fp128 %r, ptr %q
ret void
}
diff --git a/llvm/test/CodeGen/X86/fp128-store.ll b/llvm/test/CodeGen/X86/fp128-store.ll
index e93771442b74a7..e8750d02cbb61a 100644
--- a/llvm/test/CodeGen/X86/fp128-store.ll
+++ b/llvm/test/CodeGen/X86/fp128-store.ll
@@ -3,7 +3,7 @@
; RUN: llc < %s -O2 -mtriple=x86_64-linux-gnu -mattr=+mmx | FileCheck %s
; __float128 myFP128 = 1.0L; // x86_64-linux-android
-@myFP128 = dso_local global fp128 0xL00000000000000003FFF000000000000, align 16
+@myFP128 = dso_local global fp128 f0x3FFF0000000000000000000000000000, align 16
define dso_local void @set_FP128(fp128 %x) {
; CHECK-LABEL: set_FP128:
diff --git a/llvm/test/CodeGen/X86/half-constrained.ll b/llvm/test/CodeGen/X86/half-constrained.ll
index eae9b25e43e06f..32032e5e74e453 100644
--- a/llvm/test/CodeGen/X86/half-constrained.ll
+++ b/llvm/test/CodeGen/X86/half-constrained.ll
@@ -4,9 +4,9 @@
; RUN: llc < %s -mtriple=x86_64-linux-gnu | FileCheck %s --check-prefix=X64-NOF16C
; RUN: llc < %s -mtriple=x86_64-linux-gnu -mattr=f16c | FileCheck %s --check-prefix=X64-F16C
-@a = global half 0xH0000, align 2
-@b = global half 0xH0000, align 2
-@c = global half 0xH0000, align 2
+@a = global half f0x0000, align 2
+@b = global half f0x0000, align 2
+@c = global half f0x0000, align 2
define float @half_to_float() strictfp {
; X86-NOF16C-LABEL: half_to_float:
diff --git a/llvm/test/CodeGen/X86/half.ll b/llvm/test/CodeGen/X86/half.ll
index 033cadae6a1e70..2a7d96f5103199 100644
--- a/llvm/test/CodeGen/X86/half.ll
+++ b/llvm/test/CodeGen/X86/half.ll
@@ -946,7 +946,7 @@ define half @PR40273(half) #0 {
; CHECK-I686-NEXT: pinsrw $0, %eax, %xmm0
; CHECK-I686-NEXT: addl $12, %esp
; CHECK-I686-NEXT: retl
- %2 = fcmp une half %0, 0xH0000
+ %2 = fcmp une half %0, f0x0000
%3 = uitofp i1 %2 to half
ret half %3
}
@@ -1002,7 +1002,7 @@ define void @brcond(half %0) #0 {
; CHECK-I686-NEXT: retl
; CHECK-I686-NEXT: .LBB18_2: # %if.end
entry:
- %cmp = fcmp oeq half 0xH0000, %0
+ %cmp = fcmp oeq half f0x0000, %0
br i1 %cmp, label %if.then, label %if.end
if.then: ; preds = %entry
@@ -1115,7 +1115,7 @@ define void @main.158() #0 {
entry:
%0 = tail call half @llvm.fabs.f16(half undef)
%1 = fpext half %0 to float
- %compare.2 = fcmp ole half %0, 0xH4800
+ %compare.2 = fcmp ole half %0, f0x4800
%multiply.95 = fmul float %1, 5.000000e-01
%add.82 = fadd float %multiply.95, -2.000000e+00
%multiply.68 = fmul float %add.82, 0.000000e+00
@@ -1217,7 +1217,7 @@ entry:
%4 = select i1 undef, <4 x i16> undef, <4 x i16> %3
%5 = select <4 x i1> undef, <4 x i16> undef, <4 x i16> %4
%6 = bitcast <4 x i16> %5 to <4 x half>
- %7 = select <4 x i1> %2, <4 x half> <half 0xH7E00, half 0xH7E00, half 0xH7E00, half 0xH7E00>, <4 x half> %6
+ %7 = select <4 x i1> %2, <4 x half> <half f0x7E00, half f0x7E00, half f0x7E00, half f0x7E00>, <4 x half> %6
store <4 x half> %7, ptr undef, align 16
ret void
}
@@ -2154,7 +2154,7 @@ define void @pr63114() {
; CHECK-I686-NEXT: retl
%1 = load <24 x half>, ptr poison, align 2
%2 = shufflevector <24 x half> %1, <24 x half> poison, <8 x i32> <i32 2, i32 5, i32 8, i32 11, i32 14, i32 17, i32 20, i32 23>
- %3 = shufflevector <8 x half> %2, <8 x half> <half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00, half 0xH3C00>, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
+ %3 = shufflevector <8 x half> %2, <8 x half> <half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00, half f0x3C00>, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
%4 = shufflevector <16 x half> poison, <16 x half> %3, <32 x i32> <i32 0, i32 8, i32 16, i32 24, i32 1, i32 9, i32 17, i32 25, i32 2, i32 10, i32 18, i32 26, i32 3, i32 11, i32 19, i32 27, i32 4, i32 12, i32 20, i32 28, i32 5, i32 13, i32 21, i32 29, i32 6, i32 14, i32 22, i32 30, i32 7, i32 15, i32 23, i32 31>
store <32 x half> %4, ptr null, align 2
ret void
diff --git a/llvm/test/CodeGen/X86/inline-asm-fpstack.ll b/llvm/test/CodeGen/X86/inline-asm-fpstack.ll
index 2d8ad6d645bc05..e94d67e44ce338 100644
--- a/llvm/test/CodeGen/X86/inline-asm-fpstack.ll
+++ b/llvm/test/CodeGen/X86/inline-asm-fpstack.ll
@@ -226,12 +226,12 @@ define void @testPR4485(ptr %a) nounwind {
; CHECK-NEXT: retl
entry:
%0 = load x86_fp80, ptr %a, align 16
- %1 = fmul x86_fp80 %0, 0xK4006B400000000000000
- %2 = fmul x86_fp80 %1, 0xK4012F424000000000000
+ %1 = fmul x86_fp80 %0, f0x4006B400000000000000
+ %2 = fmul x86_fp80 %1, f0x4012F424000000000000
tail call void asm sideeffect "fistpl $0", "{st},~{st}"(x86_fp80 %2)
%3 = load x86_fp80, ptr %a, align 16
- %4 = fmul x86_fp80 %3, 0xK4006B400000000000000
- %5 = fmul x86_fp80 %4, 0xK4012F424000000000000
+ %4 = fmul x86_fp80 %3, f0x4006B400000000000000
+ %5 = fmul x86_fp80 %4, f0x4012F424000000000000
tail call void asm sideeffect "fistpl $0", "{st},~{st}"(x86_fp80 %5)
ret void
}
diff --git a/llvm/test/CodeGen/X86/isel-x87.ll b/llvm/test/CodeGen/X86/isel-x87.ll
index 690c1f6ea968cb..91ff261b3a132d 100644
--- a/llvm/test/CodeGen/X86/isel-x87.ll
+++ b/llvm/test/CodeGen/X86/isel-x87.ll
@@ -67,7 +67,7 @@ define x86_fp80 @f0(x86_fp80 noundef %a) nounwind {
%a.addr = alloca x86_fp80, align 16
%x = alloca x86_fp80, align 16
store x86_fp80 %a, ptr %a.addr, align 16
- store x86_fp80 0xK400A8000000000000000, ptr %x, align 16
+ store x86_fp80 f0x400A8000000000000000, ptr %x, align 16
%load1 = load x86_fp80, ptr %a.addr, align 16
%load2 = load x86_fp80, ptr %x, align 16
%add = fadd x86_fp80 %load1, %load2
diff --git a/llvm/test/CodeGen/X86/ldzero.ll b/llvm/test/CodeGen/X86/ldzero.ll
index e4385af17fe4f8..fc9b5a0f4b139e 100644
--- a/llvm/test/CodeGen/X86/ldzero.ll
+++ b/llvm/test/CodeGen/X86/ldzero.ll
@@ -29,7 +29,7 @@ entry:
%tmp = alloca double, align 8 ; <ptr> [#uses=2]
%ld = alloca x86_fp80, align 16 ; <ptr> [#uses=2]
%"alloca point" = bitcast i32 0 to i32 ; <i32> [#uses=0]
- store x86_fp80 0xK00000000000000000000, ptr %ld, align 16
+ store x86_fp80 f0x00000000000000000000, ptr %ld, align 16
%tmp1 = load x86_fp80, ptr %ld, align 16 ; <x86_fp80> [#uses=1]
%tmp12 = fptrunc x86_fp80 %tmp1 to double ; <double> [#uses=1]
store double %tmp12, ptr %tmp, align 8
diff --git a/llvm/test/CodeGen/X86/mcu-abi.ll b/llvm/test/CodeGen/X86/mcu-abi.ll
index 53c228943d9148..d7a4e4fd7e4629 100644
--- a/llvm/test/CodeGen/X86/mcu-abi.ll
+++ b/llvm/test/CodeGen/X86/mcu-abi.ll
@@ -177,7 +177,7 @@ define void @test_alignment_fp() #0 {
; CHECK-NEXT: retl
entry:
%f = alloca fp128
- store fp128 0xL00000000000000004000000000000000, ptr %f
+ store fp128 f0x40000000000000000000000000000000, ptr %f
call void @foofp(ptr inreg %f)
ret void
}
diff --git a/llvm/test/CodeGen/X86/pr114520.ll b/llvm/test/CodeGen/X86/pr114520.ll
index c557da6b3ab8cb..8de7113cf1190e 100644
--- a/llvm/test/CodeGen/X86/pr114520.ll
+++ b/llvm/test/CodeGen/X86/pr114520.ll
@@ -13,8 +13,8 @@ define half @test1(half %x) {
; CHECK-NEXT: vpinsrw $0, %ecx, %xmm0, %xmm0
; CHECK-NEXT: retq
entry:
- %cmp2 = fcmp ogt half %x, 0xHFC00
- %cond.v = select i1 %cmp2, half %x, half 0xHFC00
+ %cmp2 = fcmp ogt half %x, f0xFC00
+ %cond.v = select i1 %cmp2, half %x, half f0xFC00
ret half %cond.v
}
@@ -31,7 +31,7 @@ define <8 x half> @test2(<8 x half> %x) {
; CHECK-NEXT: vzeroupper
; CHECK-NEXT: retq
entry:
- %cmp2 = fcmp ogt <8 x half> %x, splat (half 0xHFC00)
- %cond.v = select <8 x i1> %cmp2, <8 x half> %x, <8 x half> splat (half 0xHFC00)
+ %cmp2 = fcmp ogt <8 x half> %x, splat (half f0xFC00)
+ %cond.v = select <8 x i1> %cmp2, <8 x half> %x, <8 x half> splat (half f0xFC00)
ret <8 x half> %cond.v
}
diff --git a/llvm/test/CodeGen/X86/pr13577.ll b/llvm/test/CodeGen/X86/pr13577.ll
index 3b8a05ef30f81d..779d80bd9da57a 100644
--- a/llvm/test/CodeGen/X86/pr13577.ll
+++ b/llvm/test/CodeGen/X86/pr13577.ll
@@ -17,7 +17,7 @@ define x86_fp80 @foo(x86_fp80 %a) {
; CHECK-NEXT: fcmovne %st(1), %st
; CHECK-NEXT: fstp %st(1)
; CHECK-NEXT: retq
- %1 = tail call x86_fp80 @copysignl(x86_fp80 0xK7FFF8000000000000000, x86_fp80 %a) nounwind readnone
+ %1 = tail call x86_fp80 @copysignl(x86_fp80 f0x7FFF8000000000000000, x86_fp80 %a) nounwind readnone
ret x86_fp80 %1
}
diff --git a/llvm/test/CodeGen/X86/pr33349.ll b/llvm/test/CodeGen/X86/pr33349.ll
index c879cb9867ab29..06be01df8cb69a 100644
--- a/llvm/test/CodeGen/X86/pr33349.ll
+++ b/llvm/test/CodeGen/X86/pr33349.ll
@@ -69,7 +69,7 @@ target triple = "x86_64-unknown-linux-gnu"
; SKX-NEXT: fstpt 20(%rdi)
; SKX-NEXT: retq
bb:
- %tmp = select <4 x i1> %m, <4 x x86_fp80> <x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000>, <4 x x86_fp80> zeroinitializer
+ %tmp = select <4 x i1> %m, <4 x x86_fp80> <x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000>, <4 x x86_fp80> zeroinitializer
store <4 x x86_fp80> %tmp, ptr %p, align 16
ret void
}
diff --git a/llvm/test/CodeGen/X86/pr34080.ll b/llvm/test/CodeGen/X86/pr34080.ll
index 436b54db333b34..54e81205e139c1 100644
--- a/llvm/test/CodeGen/X86/pr34080.ll
+++ b/llvm/test/CodeGen/X86/pr34080.ll
@@ -148,14 +148,14 @@ entry:
store double %conv1, ptr %tx, align 16
%conv4 = fpext double %conv1 to x86_fp80
%sub = fsub x86_fp80 %z, %conv4
- %mul = fmul x86_fp80 %sub, 0xK40178000000000000000
+ %mul = fmul x86_fp80 %sub, f0x40178000000000000000
%conv.1 = fptosi x86_fp80 %mul to i32
%conv1.1 = sitofp i32 %conv.1 to double
%arrayidx.1 = getelementptr inbounds [3 x double], ptr %tx, i64 0, i64 1
store double %conv1.1, ptr %arrayidx.1, align 8
%conv4.1 = fpext double %conv1.1 to x86_fp80
%sub.1 = fsub x86_fp80 %mul, %conv4.1
- %mul.1 = fmul x86_fp80 %sub.1, 0xK40178000000000000000
+ %mul.1 = fmul x86_fp80 %sub.1, f0x40178000000000000000
%conv5 = fptrunc x86_fp80 %mul.1 to double
%arrayidx6 = getelementptr inbounds [3 x double], ptr %tx, i64 0, i64 2
store double %conv5, ptr %arrayidx6, align 16
diff --git a/llvm/test/CodeGen/X86/pr34177.ll b/llvm/test/CodeGen/X86/pr34177.ll
index 5b2431eb214955..d87f854d4e6f05 100644
--- a/llvm/test/CodeGen/X86/pr34177.ll
+++ b/llvm/test/CodeGen/X86/pr34177.ll
@@ -85,7 +85,7 @@ define void @test(<4 x i64> %a, <4 x x86_fp80> %b, ptr %c) local_unnamed_addr {
; AVX512VL-NEXT: fadd %st, %st(0)
; AVX512VL-NEXT: fstpt 40(%rdi)
%1 = icmp eq <4 x i64> <i64 0, i64 1, i64 2, i64 3>, %a
- %2 = select <4 x i1> %1, <4 x x86_fp80> <x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000, x86_fp80 0xK3FFF8000000000000000>, <4 x x86_fp80> zeroinitializer
+ %2 = select <4 x i1> %1, <4 x x86_fp80> <x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000, x86_fp80 f0x3FFF8000000000000000>, <4 x x86_fp80> zeroinitializer
%3 = fadd <4 x x86_fp80> %2, %2
%4 = shufflevector <4 x x86_fp80> %3, <4 x x86_fp80> %b, <8 x i32> <i32 0, i32 4, i32 1, i32 5, i32 2, i32 6, i32 3, i32 7>
store <8 x x86_fp80> %4, ptr %c, align 16
diff --git a/llvm/test/CodeGen/X86/pr40529.ll b/llvm/test/CodeGen/X86/pr40529.ll
index a0ab4b5ffb6359..24eabbeb4ff26d 100644
--- a/llvm/test/CodeGen/X86/pr40529.ll
+++ b/llvm/test/CodeGen/X86/pr40529.ll
@@ -34,10 +34,10 @@ entry:
%conv = fptosi x86_fp80 %z to i32
%conv1 = sitofp i32 %conv to x86_fp80
%sub = fsub x86_fp80 %z, %conv1
- %mul = fmul x86_fp80 %sub, 0xK40178000000000000000
+ %mul = fmul x86_fp80 %sub, f0x40178000000000000000
%conv2 = fptosi x86_fp80 %mul to i32
%conv3 = sitofp i32 %conv2 to x86_fp80
%sub4 = fsub x86_fp80 %mul, %conv3
- %mul5 = fmul x86_fp80 %sub4, 0xK40178000000000000000
+ %mul5 = fmul x86_fp80 %sub4, f0x40178000000000000000
ret x86_fp80 %mul5
}
diff --git a/llvm/test/CodeGen/X86/pr43157.ll b/llvm/test/CodeGen/X86/pr43157.ll
index 2b333782f43a0a..ca5383301527c3 100644
--- a/llvm/test/CodeGen/X86/pr43157.ll
+++ b/llvm/test/CodeGen/X86/pr43157.ll
@@ -14,7 +14,7 @@ define void @foo(fp128 %x) {
; CHECK-NEXT: .cfi_def_cfa_offset 8
; CHECK-NEXT: retq
entry:
- %mul = fmul fp128 %x, 0xL00000000000000003FFF800000000000
+ %mul = fmul fp128 %x, f0x3FFF8000000000000000000000000000
tail call void asm sideeffect "", "x,~{dirflag},~{fpsr},~{flags}"(fp128 %mul)
ret void
}
diff --git a/llvm/test/CodeGen/X86/pr91005.ll b/llvm/test/CodeGen/X86/pr91005.ll
index 97fd1ce4568826..0a6da00ea390de 100644
--- a/llvm/test/CodeGen/X86/pr91005.ll
+++ b/llvm/test/CodeGen/X86/pr91005.ll
@@ -28,7 +28,7 @@ common.ret: ; preds = %4, %1
ret void
4: ; preds = %1
- %5 = select <2 x i1> %3, <2 x half> <half 0xH3C00, half 0xH3C00>, <2 x half> zeroinitializer
+ %5 = select <2 x i1> %3, <2 x half> <half f0x3C00, half f0x3C00>, <2 x half> zeroinitializer
%6 = fmul <2 x half> %5, zeroinitializer
%7 = fsub <2 x half> %6, zeroinitializer
%8 = extractelement <2 x half> %7, i64 0
diff --git a/llvm/test/CodeGen/X86/select.ll b/llvm/test/CodeGen/X86/select.ll
index d2e7a61bafb1c4..97ce316d1117cd 100644
--- a/llvm/test/CodeGen/X86/select.ll
+++ b/llvm/test/CodeGen/X86/select.ll
@@ -426,7 +426,7 @@ define x86_fp80 @test7(i32 %tmp8) nounwind {
; MCU-NEXT: fldt {{\.?LCPI[0-9]+_[0-9]+}}(%eax)
; MCU-NEXT: retl
%tmp9 = icmp sgt i32 %tmp8, -1
- %retval = select i1 %tmp9, x86_fp80 0xK4005B400000000000000, x86_fp80 0xK40078700000000000000
+ %retval = select i1 %tmp9, x86_fp80 f0x4005B400000000000000, x86_fp80 f0x40078700000000000000
ret x86_fp80 %retval
}
diff --git a/llvm/test/CodeGen/X86/shrink-fp-const2.ll b/llvm/test/CodeGen/X86/shrink-fp-const2.ll
index 8a2a3e3f185e6a..f304043c0fbc31 100644
--- a/llvm/test/CodeGen/X86/shrink-fp-const2.ll
+++ b/llvm/test/CodeGen/X86/shrink-fp-const2.ll
@@ -7,6 +7,6 @@ define x86_fp80 @test2() nounwind {
; CHECK-NEXT: flds {{\.?LCPI[0-9]+_[0-9]+}}
; CHECK-NEXT: retl
entry:
- ret x86_fp80 0xK3FFFC000000000000000
+ ret x86_fp80 f0x3FFFC000000000000000
}
diff --git a/llvm/test/CodeGen/X86/soft-fp-legal-in-HW-reg.ll b/llvm/test/CodeGen/X86/soft-fp-legal-in-HW-reg.ll
index f2b0a6e1863052..289a0a045c79a2 100644
--- a/llvm/test/CodeGen/X86/soft-fp-legal-in-HW-reg.ll
+++ b/llvm/test/CodeGen/X86/soft-fp-legal-in-HW-reg.ll
@@ -35,7 +35,7 @@ define fp128 @TestSelect(fp128 %a, fp128 %b) {
; CHECK-NEXT: retq
%cmp = fcmp ogt fp128 %a, %b
%sub = fsub fp128 %a, %b
- %res = select i1 %cmp, fp128 %sub, fp128 0xL00000000000000000000000000000000
+ %res = select i1 %cmp, fp128 %sub, fp128 f0x00000000000000000000000000000000
ret fp128 %res
}
@@ -71,6 +71,6 @@ define fp128 @TestFneg(fp128 %a) {
; CHECK-NEXT: .cfi_def_cfa_offset 8
; CHECK-NEXT: retq
%mul = fmul fp128 %a, %a
- %res = fsub fp128 0xL00000000000000008000000000000000, %mul
+ %res = fsub fp128 f0x80000000000000000000000000000000, %mul
ret fp128 %res
}
diff --git a/llvm/test/CodeGen/X86/sse-fcopysign.ll b/llvm/test/CodeGen/X86/sse-fcopysign.ll
index 3eadcad145b65d..f4b1dbd8095893 100644
--- a/llvm/test/CodeGen/X86/sse-fcopysign.ll
+++ b/llvm/test/CodeGen/X86/sse-fcopysign.ll
@@ -261,7 +261,7 @@ define void @PR41749() {
; X64-NEXT: fstp %st(1)
; X64-NEXT: fstpt (%rax)
; X64-NEXT: retq
- %1 = call x86_fp80 @llvm.copysign.f80(x86_fp80 0xK00000000000000000000, x86_fp80 undef)
+ %1 = call x86_fp80 @llvm.copysign.f80(x86_fp80 f0x00000000000000000000, x86_fp80 undef)
store x86_fp80 %1, ptr undef, align 16
ret void
}
diff --git a/llvm/test/CodeGen/X86/win64-long-double.ll b/llvm/test/CodeGen/X86/win64-long-double.ll
index 94559c9005023c..79978df5b4c5cd 100644
--- a/llvm/test/CodeGen/X86/win64-long-double.ll
+++ b/llvm/test/CodeGen/X86/win64-long-double.ll
@@ -1,6 +1,6 @@
; RUN: llc -mtriple x86_64-w64-mingw32 %s -o - | FileCheck %s
-@glob = common dso_local local_unnamed_addr global x86_fp80 0xK00000000000000000000, align 16
+@glob = common dso_local local_unnamed_addr global x86_fp80 f0x00000000000000000000, align 16
define dso_local void @call() {
entry:
diff --git a/llvm/test/CodeGen/X86/x86-32-intrcc.ll b/llvm/test/CodeGen/X86/x86-32-intrcc.ll
index a0f937e2c323b6..198afbb3acb9f5 100644
--- a/llvm/test/CodeGen/X86/x86-32-intrcc.ll
+++ b/llvm/test/CodeGen/X86/x86-32-intrcc.ll
@@ -144,7 +144,7 @@ define x86_intrcc void @test_isr_clobbers(ptr byval(%struct.interrupt_frame) %fr
ret void
}
-@f80 = common global x86_fp80 0xK00000000000000000000, align 4
+@f80 = common global x86_fp80 f0x00000000000000000000, align 4
; Test that the presence of x87 does not crash the FP stackifier
define x86_intrcc void @test_isr_x87(ptr byval(%struct.interrupt_frame) %frame) nounwind {
@@ -175,7 +175,7 @@ define x86_intrcc void @test_isr_x87(ptr byval(%struct.interrupt_frame) %frame)
; CHECK0-NEXT: iretl
entry:
%ld = load x86_fp80, ptr @f80, align 4
- %add = fadd x86_fp80 %ld, 0xK3FFF8000000000000000
+ %add = fadd x86_fp80 %ld, f0x3FFF8000000000000000
store x86_fp80 %add, ptr @f80, align 4
ret void
}
diff --git a/llvm/test/CodeGen/X86/x86-64-intrcc.ll b/llvm/test/CodeGen/X86/x86-64-intrcc.ll
index 5fc606eb566ea6..512885987d3025 100644
--- a/llvm/test/CodeGen/X86/x86-64-intrcc.ll
+++ b/llvm/test/CodeGen/X86/x86-64-intrcc.ll
@@ -90,7 +90,7 @@ define x86_intrcc void @test_isr_clobbers(ptr byval(%struct.interrupt_frame) %fr
ret void
}
-@f80 = common dso_local global x86_fp80 0xK00000000000000000000, align 4
+@f80 = common dso_local global x86_fp80 f0x00000000000000000000, align 4
; Test that the presence of x87 does not crash the FP stackifier
define x86_intrcc void @test_isr_x87(ptr byval(%struct.interrupt_frame) %frame) {
@@ -102,7 +102,7 @@ define x86_intrcc void @test_isr_x87(ptr byval(%struct.interrupt_frame) %frame)
; CHECK-NEXT: iretq
entry:
%ld = load x86_fp80, ptr @f80, align 4
- %add = fadd x86_fp80 %ld, 0xK3FFF8000000000000000
+ %add = fadd x86_fp80 %ld, f0x3FFF8000000000000000
store x86_fp80 %add, ptr @f80, align 4
ret void
}
diff --git a/llvm/test/DebugInfo/COFF/AArch64/codeview-b-register.mir b/llvm/test/DebugInfo/COFF/AArch64/codeview-b-register.mir
index f953509be32152..222aa2cd532ac8 100644
--- a/llvm/test/DebugInfo/COFF/AArch64/codeview-b-register.mir
+++ b/llvm/test/DebugInfo/COFF/AArch64/codeview-b-register.mir
@@ -29,7 +29,7 @@
define internal fastcc i1 @test.fn(half %0) !dbg !4 {
Entry:
call void @llvm.dbg.value(metadata half %0, metadata !11, metadata !DIExpression()), !dbg !13
- %1 = fcmp une half 0xH0000, %0, !dbg !14
+ %1 = fcmp une half f0x0000, %0, !dbg !14
ret i1 %1
}
diff --git a/llvm/test/DebugInfo/COFF/AArch64/codeview-h-register.mir b/llvm/test/DebugInfo/COFF/AArch64/codeview-h-register.mir
index 515a4bcf8f46a5..0186015ca7977a 100644
--- a/llvm/test/DebugInfo/COFF/AArch64/codeview-h-register.mir
+++ b/llvm/test/DebugInfo/COFF/AArch64/codeview-h-register.mir
@@ -27,7 +27,7 @@
define internal fastcc i1 @test.fn(half %0) !dbg !4 {
Entry:
call void @llvm.dbg.value(metadata half %0, metadata !11, metadata !DIExpression()), !dbg !13
- %1 = fcmp une half 0xH0000, %0, !dbg !14
+ %1 = fcmp une half f0x0000, %0, !dbg !14
ret i1 %1
}
diff --git a/llvm/test/DebugInfo/COFF/fortran-basic.ll b/llvm/test/DebugInfo/COFF/fortran-basic.ll
index 1a442355538205..2d7cdb5a444a98 100644
--- a/llvm/test/DebugInfo/COFF/fortran-basic.ll
+++ b/llvm/test/DebugInfo/COFF/fortran-basic.ll
@@ -117,7 +117,7 @@ alloca_0:
call void @llvm.for.cpystr.i64.i64.i64(ptr getelementptr inbounds ([18 x i8], ptr @COM, i32 0, i64 12), i64 6, ptr @strlit, i64 3, i64 0, i1 false), !dbg !47
store %complex_64bit { float 0x40219999A0000000, float 0x3FF19999A0000000 }, ptr %"ARRAY$CMP8", align 8, !dbg !48
store %complex_128bit { double 0x403028F5C0000000, double 0x40019999A0000000 }, ptr %"ARRAY$CMP16", align 8, !dbg !49
- store %complex_256bit { fp128 0xL00000000000000004004028F5C000000, fp128 0xL00000000000000004000A66666000000 }, ptr %"ARRAY$CMP32", align 16, !dbg !50
+ store %complex_256bit { fp128 f0x4004028F5C0000000000000000000000, fp128 f0x4000A666660000000000000000000000 }, ptr %"ARRAY$CMP32", align 16, !dbg !50
ret void, !dbg !51
}
diff --git a/llvm/test/DebugInfo/MIR/InstrRef/x86-fp-stackifier-drop-locations.mir b/llvm/test/DebugInfo/MIR/InstrRef/x86-fp-stackifier-drop-locations.mir
index be12082c45b9ab..8c0b9606c7eeed 100644
--- a/llvm/test/DebugInfo/MIR/InstrRef/x86-fp-stackifier-drop-locations.mir
+++ b/llvm/test/DebugInfo/MIR/InstrRef/x86-fp-stackifier-drop-locations.mir
@@ -30,7 +30,7 @@
target datalayout = "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-f64:32:64-f80:32-n8:16:32-S128"
target triple = "i386-unknown-linux-gnu"
- @glob = dso_local local_unnamed_addr global x86_fp80 0xK3FFF9DF3B645A1CAC000, align 4, !dbg !0
+ @glob = dso_local local_unnamed_addr global x86_fp80 f0x3FFF9DF3B645A1CAC000, align 4, !dbg !0
; Function Attrs: nounwind
define dso_local x86_fp80 @foo(x86_fp80 %a, x86_fp80 %b, x86_fp80 %c) local_unnamed_addr !dbg !13 {
@@ -46,7 +46,7 @@
call void @llvm.dbg.value(metadata x86_fp80 %mul, metadata !17, metadata !DIExpression()), !dbg !20
%call2 = tail call x86_fp80 @ext() #3, !dbg !24
call void @llvm.dbg.value(metadata x86_fp80 undef, metadata !18, metadata !DIExpression()), !dbg !20
- %cmp = fcmp olt x86_fp80 %mul, 0xK4001A000000000000000, !dbg !25
+ %cmp = fcmp olt x86_fp80 %mul, f0x4001A000000000000000, !dbg !25
%0 = load x86_fp80, ptr @glob, align 4, !dbg !27
%add3 = fadd x86_fp80 %mul, %0, !dbg !27
%a.addr.0 = select i1 %cmp, x86_fp80 %add3, x86_fp80 %mul, !dbg !27
diff --git a/llvm/test/DebugInfo/Sparc/entry-value-complex-reg-expr.ll b/llvm/test/DebugInfo/Sparc/entry-value-complex-reg-expr.ll
index 952619fcc1a054..0ab9d1365a4039 100644
--- a/llvm/test/DebugInfo/Sparc/entry-value-complex-reg-expr.ll
+++ b/llvm/test/DebugInfo/Sparc/entry-value-complex-reg-expr.ll
@@ -40,7 +40,7 @@ target triple = "sparc64"
; CHECK-NEXT: .xword 0
; CHECK-NEXT: .xword 0
-@global = common global fp128 0xL00000000000000000000000000000000, align 16, !dbg !0
+@global = common global fp128 f0x00000000000000000000000000000000, align 16, !dbg !0
; Function Attrs: nounwind
define signext i32 @foo(fp128 %p) #0 !dbg !12 {
diff --git a/llvm/test/DebugInfo/Sparc/subreg.ll b/llvm/test/DebugInfo/Sparc/subreg.ll
index afc1a00fbb0e82..119e62ef22119e 100644
--- a/llvm/test/DebugInfo/Sparc/subreg.ll
+++ b/llvm/test/DebugInfo/Sparc/subreg.ll
@@ -9,7 +9,7 @@ target triple = "sparc64"
define void @fn1(fp128 %b) local_unnamed_addr !dbg !7 {
entry:
tail call void @llvm.dbg.value(metadata fp128 %b, i64 0, metadata !13, metadata !18), !dbg !17
- tail call void @llvm.dbg.value(metadata fp128 0xL00000000000000000000000000000000, i64 0, metadata !13, metadata !19), !dbg !17
+ tail call void @llvm.dbg.value(metadata fp128 f0x00000000000000000000000000000000, i64 0, metadata !13, metadata !19), !dbg !17
ret void, !dbg !20
}
diff --git a/llvm/test/DebugInfo/X86/float_const_loclist.ll b/llvm/test/DebugInfo/X86/float_const_loclist.ll
index 4483370c828597..1fe62d1b3cd460 100644
--- a/llvm/test/DebugInfo/X86/float_const_loclist.ll
+++ b/llvm/test/DebugInfo/X86/float_const_loclist.ll
@@ -15,7 +15,7 @@
;
; SANITY: CALL{{.*}} @barrier
; SANITY: DBG_VALUE float 0x40091EB860000000
-; SANITY: DBG_VALUE x86_fp80 0xK4000C8F5C28F5C28F800
+; SANITY: DBG_VALUE x86_fp80 f0x4000C8F5C28F5C28F800
; SANITY: TAILJMP{{.*}} @barrier
;
; CHECK: .debug_info contents:
@@ -35,7 +35,7 @@ define void @foo() #0 !dbg !4 {
entry:
tail call void (...) @barrier() #3, !dbg !16
tail call void @llvm.dbg.value(metadata float 0x40091EB860000000, metadata !8, metadata !17), !dbg !18
- tail call void @llvm.dbg.value(metadata x86_fp80 0xK4000C8F5C28F5C28F800, metadata !10, metadata !17), !dbg !19
+ tail call void @llvm.dbg.value(metadata x86_fp80 f0x4000C8F5C28F5C28F800, metadata !10, metadata !17), !dbg !19
tail call void (...) @barrier() #3, !dbg !20
ret void, !dbg !21
}
diff --git a/llvm/test/DebugInfo/X86/global-sra-fp80-array.ll b/llvm/test/DebugInfo/X86/global-sra-fp80-array.ll
index d3ab3bdcb1a42d..a01d11438f1823 100644
--- a/llvm/test/DebugInfo/X86/global-sra-fp80-array.ll
+++ b/llvm/test/DebugInfo/X86/global-sra-fp80-array.ll
@@ -21,8 +21,8 @@ target triple = "x86_64-unknown-linux-gnu"
@array = internal global [2 x x86_fp80] zeroinitializer, align 16, !dbg !0
-; CHECK: @array.0 = internal unnamed_addr global x86_fp80 0xK00000000000000000000, align 16, !dbg ![[EL0:.*]]
-; CHECK: @array.1 = internal unnamed_addr global x86_fp80 0xK00000000000000000000, align 16, !dbg ![[EL1:.*]]
+; CHECK: @array.0 = internal unnamed_addr global x86_fp80 f0x00000000000000000000, align 16, !dbg ![[EL0:.*]]
+; CHECK: @array.1 = internal unnamed_addr global x86_fp80 f0x00000000000000000000, align 16, !dbg ![[EL1:.*]]
;
; CHECK: ![[EL0]] = !DIGlobalVariableExpression(var: ![[VAR:.*]], expr: !DIExpression(DW_OP_LLVM_fragment, 0, 128))
; CHECK: ![[VAR]] = distinct !DIGlobalVariable(name: "array"
@@ -76,7 +76,7 @@ entry:
%6 = load x86_fp80, ptr @array, align 16, !dbg !29
%7 = load x86_fp80, ptr getelementptr inbounds ([2 x x86_fp80], ptr @array, i64 0, i64 1), align 16, !dbg !30
%add = fadd x86_fp80 %6, %7, !dbg !31
- %cmp = fcmp ogt x86_fp80 %add, 0xK00000000000000000000, !dbg !32
+ %cmp = fcmp ogt x86_fp80 %add, f0x00000000000000000000, !dbg !32
%conv5 = zext i1 %cmp to i32, !dbg !32
ret i32 %conv5, !dbg !33
}
diff --git a/llvm/test/DebugInfo/X86/global-sra-fp80-struct.ll b/llvm/test/DebugInfo/X86/global-sra-fp80-struct.ll
index 7adc40c5b844dc..bd5e6ab7c909a9 100644
--- a/llvm/test/DebugInfo/X86/global-sra-fp80-struct.ll
+++ b/llvm/test/DebugInfo/X86/global-sra-fp80-struct.ll
@@ -24,7 +24,7 @@ target triple = "x86_64-unknown-linux-gnu"
@static_struct = internal global %struct.mystruct zeroinitializer, align 16, !dbg !0
-; CHECK: @static_struct.0 = internal unnamed_addr global x86_fp80 0xK00000000000000000000, align 16, !dbg ![[EL0:.*]]
+; CHECK: @static_struct.0 = internal unnamed_addr global x86_fp80 f0x00000000000000000000, align 16, !dbg ![[EL0:.*]]
; CHECK: @static_struct.1 = internal unnamed_addr global i32 0, align 16, !dbg ![[EL1:.*]]
; CHECK: ![[EL0]] = !DIGlobalVariableExpression(var: ![[VAR:.*]], expr: !DIExpression(DW_OP_LLVM_fragment, 0, 128))
@@ -79,7 +79,7 @@ entry:
%7 = load i32, ptr getelementptr inbounds (%struct.mystruct, ptr @static_struct, i32 0, i32 1), align 16, !dbg !31
%conv5 = sitofp i32 %7 to x86_fp80, !dbg !32
%add = fadd x86_fp80 %6, %conv5, !dbg !33
- %cmp = fcmp ogt x86_fp80 %add, 0xK00000000000000000000, !dbg !34
+ %cmp = fcmp ogt x86_fp80 %add, f0x00000000000000000000, !dbg !34
%conv6 = zext i1 %cmp to i32, !dbg !34
ret i32 %conv6, !dbg !35
}
diff --git a/llvm/test/Instrumentation/AddressSanitizer/basic.ll b/llvm/test/Instrumentation/AddressSanitizer/basic.ll
index 1aef8c03c9d3e5..bcaf02694452ab 100644
--- a/llvm/test/Instrumentation/AddressSanitizer/basic.ll
+++ b/llvm/test/Instrumentation/AddressSanitizer/basic.ll
@@ -100,7 +100,7 @@ entry:
define void @LongDoubleTest(ptr nocapture %a) nounwind uwtable sanitize_address {
entry:
- store x86_fp80 0xK3FFF8000000000000000, ptr %a, align 16
+ store x86_fp80 f0x3FFF8000000000000000, ptr %a, align 16
ret void
}
diff --git a/llvm/test/Instrumentation/HeapProfiler/basic.ll b/llvm/test/Instrumentation/HeapProfiler/basic.ll
index 5d918f20de842b..ee30dc41e29b78 100644
--- a/llvm/test/Instrumentation/HeapProfiler/basic.ll
+++ b/llvm/test/Instrumentation/HeapProfiler/basic.ll
@@ -55,7 +55,7 @@ entry:
define void @FP80Test(ptr nocapture %a) nounwind uwtable {
entry:
- store x86_fp80 0xK3FFF8000000000000000, ptr %a, align 16
+ store x86_fp80 f0x3FFF8000000000000000, ptr %a, align 16
ret void
}
; CHECK-LABEL: @FP80Test
@@ -65,7 +65,7 @@ entry:
; CHECK-NEXT: store i64 %[[NEW_ST_SHADOW]]
; CHECK-NOT: store i64
; The actual store.
-; CHECK: store x86_fp80 0xK3FFF8000000000000000, ptr %a
+; CHECK: store x86_fp80 f0x3FFF8000000000000000, ptr %a
; CHECK: ret void
define void @i40test(ptr %a, ptr %b) nounwind uwtable {
diff --git a/llvm/test/Instrumentation/NumericalStabilitySanitizer/basic.ll b/llvm/test/Instrumentation/NumericalStabilitySanitizer/basic.ll
index 03c5e917f07650..0a947a0c15e9dc 100644
--- a/llvm/test/Instrumentation/NumericalStabilitySanitizer/basic.ll
+++ b/llvm/test/Instrumentation/NumericalStabilitySanitizer/basic.ll
@@ -9,7 +9,7 @@ declare float @declaration_only(float %a) sanitize_numerical_stability
; Tests with simple control flow.
@float_const = private unnamed_addr constant float 0.5
-@x86_fp80_const = private unnamed_addr constant x86_fp80 0xK3FC9E69594BEC44DE000
+@x86_fp80_const = private unnamed_addr constant x86_fp80 f0x3FC9E69594BEC44DE000
@double_const = private unnamed_addr constant double 0.5
@@ -68,8 +68,8 @@ define x86_fp80 @param_add_return_x86_fp80(x86_fp80 %a) sanitize_numerical_stabi
; CHECK-NEXT: [[TMP3:%.*]] = fpext x86_fp80 [[A:%.*]] to fp128
; CHECK-NEXT: [[TMP4:%.*]] = select i1 [[TMP1]], fp128 [[TMP2]], fp128 [[TMP3]]
; CHECK-NEXT: store i64 0, ptr @__nsan_shadow_args_tag, align 8
-; CHECK-NEXT: [[B:%.*]] = fadd x86_fp80 [[A]], 0xK3FC9E69594BEC44DE000
-; CHECK-NEXT: [[TMP5:%.*]] = fadd fp128 [[TMP4]], 0xLC0000000000000003FC9CD2B297D889B
+; CHECK-NEXT: [[B:%.*]] = fadd x86_fp80 [[A]], f0x3FC9E69594BEC44DE000
+; CHECK-NEXT: [[TMP5:%.*]] = fadd fp128 [[TMP4]], f0x3FC9CD2B297D889BC000000000000000
; CHECK-NEXT: [[TMP6:%.*]] = call i32 @__nsan_internal_check_longdouble_q(x86_fp80 [[B]], fp128 [[TMP5]], i32 1, i64 0)
; CHECK-NEXT: [[TMP7:%.*]] = icmp eq i32 [[TMP6]], 1
; CHECK-NEXT: [[TMP8:%.*]] = fpext x86_fp80 [[B]] to fp128
@@ -79,7 +79,7 @@ define x86_fp80 @param_add_return_x86_fp80(x86_fp80 %a) sanitize_numerical_stabi
; CHECK-NEXT: ret x86_fp80 [[B]]
;
entry:
- %b = fadd x86_fp80 %a, 0xK3FC9E69594BEC44DE000
+ %b = fadd x86_fp80 %a, f0x3FC9E69594BEC44DE000
ret x86_fp80 %b
}
@@ -93,7 +93,7 @@ define double @param_add_return_double(double %a) sanitize_numerical_stability {
; DQQ-NEXT: [[TMP4:%.*]] = select i1 [[TMP1]], fp128 [[TMP2]], fp128 [[TMP3]]
; DQQ-NEXT: store i64 0, ptr @__nsan_shadow_args_tag, align 8
; DQQ-NEXT: [[B:%.*]] = fadd double [[A]], 1.000000e+00
-; DQQ-NEXT: [[TMP5:%.*]] = fadd fp128 [[TMP4]], 0xL00000000000000003FFF000000000000
+; DQQ-NEXT: [[TMP5:%.*]] = fadd fp128 [[TMP4]], f0x3FFF0000000000000000000000000000
; DQQ-NEXT: [[TMP6:%.*]] = call i32 @__nsan_internal_check_double_q(double [[B]], fp128 [[TMP5]], i32 1, i64 0)
; DQQ-NEXT: [[TMP7:%.*]] = icmp eq i32 [[TMP6]], 1
; DQQ-NEXT: [[TMP8:%.*]] = fpext double [[B]] to fp128
@@ -111,7 +111,7 @@ define double @param_add_return_double(double %a) sanitize_numerical_stability {
; DLQ-NEXT: [[TMP4:%.*]] = select i1 [[TMP1]], x86_fp80 [[TMP2]], x86_fp80 [[TMP3]]
; DLQ-NEXT: store i64 0, ptr @__nsan_shadow_args_tag, align 8
; DLQ-NEXT: [[B:%.*]] = fadd double [[A]], 1.000000e+00
-; DLQ-NEXT: [[TMP5:%.*]] = fadd x86_fp80 [[TMP4]], 0xK3FFF8000000000000000
+; DLQ-NEXT: [[TMP5:%.*]] = fadd x86_fp80 [[TMP4]], f0x3FFF8000000000000000
; DLQ-NEXT: [[TMP6:%.*]] = call i32 @__nsan_internal_check_double_l(double [[B]], x86_fp80 [[TMP5]], i32 1, i64 0)
; DLQ-NEXT: [[TMP7:%.*]] = icmp eq i32 [[TMP6]], 1
; DLQ-NEXT: [[TMP8:%.*]] = fpext double [[B]] to x86_fp80
@@ -194,8 +194,8 @@ define void @constantload_add_store_x86_fp80(ptr %dst) sanitize_numerical_stabil
; CHECK-NEXT: entry:
; CHECK-NEXT: [[B:%.*]] = load x86_fp80, ptr @x86_fp80_const, align 16
; CHECK-NEXT: [[TMP0:%.*]] = fpext x86_fp80 [[B]] to fp128
-; CHECK-NEXT: [[C:%.*]] = fadd x86_fp80 [[B]], 0xK3FC9E69594BEC44DE000
-; CHECK-NEXT: [[TMP1:%.*]] = fadd fp128 [[TMP0]], 0xLC0000000000000003FC9CD2B297D889B
+; CHECK-NEXT: [[C:%.*]] = fadd x86_fp80 [[B]], f0x3FC9E69594BEC44DE000
+; CHECK-NEXT: [[TMP1:%.*]] = fadd fp128 [[TMP0]], f0x3FC9CD2B297D889BC000000000000000
; CHECK-NEXT: [[TMP2:%.*]] = call ptr @__nsan_get_shadow_ptr_for_longdouble_store(ptr [[DST:%.*]], i64 1)
; CHECK-NEXT: [[TMP3:%.*]] = ptrtoint ptr [[DST]] to i64
; CHECK-NEXT: [[TMP4:%.*]] = call i32 @__nsan_internal_check_longdouble_q(x86_fp80 [[C]], fp128 [[TMP1]], i32 4, i64 [[TMP3]])
@@ -208,7 +208,7 @@ define void @constantload_add_store_x86_fp80(ptr %dst) sanitize_numerical_stabil
;
entry:
%b = load x86_fp80, ptr @x86_fp80_const
- %c = fadd x86_fp80 %b, 0xK3FC9E69594BEC44DE000
+ %c = fadd x86_fp80 %b, f0x3FC9E69594BEC44DE000
store x86_fp80 %c, ptr %dst, align 1
ret void
}
@@ -219,7 +219,7 @@ define void @constantload_add_store_double(ptr %dst) sanitize_numerical_stabilit
; DQQ-NEXT: [[B:%.*]] = load double, ptr @double_const, align 8
; DQQ-NEXT: [[TMP0:%.*]] = fpext double [[B]] to fp128
; DQQ-NEXT: [[C:%.*]] = fadd double [[B]], 1.000000e+00
-; DQQ-NEXT: [[TMP1:%.*]] = fadd fp128 [[TMP0]], 0xL00000000000000003FFF000000000000
+; DQQ-NEXT: [[TMP1:%.*]] = fadd fp128 [[TMP0]], f0x3FFF0000000000000000000000000000
; DQQ-NEXT: [[TMP2:%.*]] = call ptr @__nsan_get_shadow_ptr_for_double_store(ptr [[DST:%.*]], i64 1)
; DQQ-NEXT: [[TMP3:%.*]] = ptrtoint ptr [[DST]] to i64
; DQQ-NEXT: [[TMP4:%.*]] = call i32 @__nsan_internal_check_double_q(double [[C]], fp128 [[TMP1]], i32 4, i64 [[TMP3]])
@@ -235,7 +235,7 @@ define void @constantload_add_store_double(ptr %dst) sanitize_numerical_stabilit
; DLQ-NEXT: [[B:%.*]] = load double, ptr @double_const, align 8
; DLQ-NEXT: [[TMP0:%.*]] = fpext double [[B]] to x86_fp80
; DLQ-NEXT: [[C:%.*]] = fadd double [[B]], 1.000000e+00
-; DLQ-NEXT: [[TMP1:%.*]] = fadd x86_fp80 [[TMP0]], 0xK3FFF8000000000000000
+; DLQ-NEXT: [[TMP1:%.*]] = fadd x86_fp80 [[TMP0]], f0x3FFF8000000000000000
; DLQ-NEXT: [[TMP2:%.*]] = call ptr @__nsan_get_shadow_ptr_for_double_store(ptr [[DST:%.*]], i64 1)
; DLQ-NEXT: [[TMP3:%.*]] = ptrtoint ptr [[DST]] to i64
; DLQ-NEXT: [[TMP4:%.*]] = call i32 @__nsan_internal_check_double_l(double [[C]], x86_fp80 [[TMP1]], i32 4, i64 [[TMP3]])
@@ -302,8 +302,8 @@ define void @load_add_store_x86_fp80(ptr %a) sanitize_numerical_stability {
; CHECK-NEXT: br label [[TMP6]]
; CHECK: 6:
; CHECK-NEXT: [[TMP7:%.*]] = phi fp128 [ [[TMP3]], [[TMP2]] ], [ [[TMP5]], [[TMP4]] ]
-; CHECK-NEXT: [[C:%.*]] = fadd x86_fp80 [[B]], 0xK3FC9E69594BEC44DE000
-; CHECK-NEXT: [[TMP8:%.*]] = fadd fp128 [[TMP7]], 0xLC0000000000000003FC9CD2B297D889B
+; CHECK-NEXT: [[C:%.*]] = fadd x86_fp80 [[B]], f0x3FC9E69594BEC44DE000
+; CHECK-NEXT: [[TMP8:%.*]] = fadd fp128 [[TMP7]], f0x3FC9CD2B297D889BC000000000000000
; CHECK-NEXT: [[TMP9:%.*]] = call ptr @__nsan_get_shadow_ptr_for_longdouble_store(ptr [[A]], i64 1)
; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: [[TMP11:%.*]] = call i32 @__nsan_internal_check_longdouble_q(x86_fp80 [[C]], fp128 [[TMP8]], i32 4, i64 [[TMP10]])
@@ -316,7 +316,7 @@ define void @load_add_store_x86_fp80(ptr %a) sanitize_numerical_stability {
;
entry:
%b = load x86_fp80, ptr %a, align 1
- %c = fadd x86_fp80 %b, 0xK3FC9E69594BEC44DE000
+ %c = fadd x86_fp80 %b, f0x3FC9E69594BEC44DE000
store x86_fp80 %c, ptr %a, align 1
ret void
}
@@ -337,7 +337,7 @@ define void @load_add_store_double(ptr %a) sanitize_numerical_stability {
; DQQ: 6:
; DQQ-NEXT: [[TMP7:%.*]] = phi fp128 [ [[TMP3]], [[TMP2]] ], [ [[TMP5]], [[TMP4]] ]
; DQQ-NEXT: [[C:%.*]] = fadd double [[B]], 1.000000e+00
-; DQQ-NEXT: [[TMP8:%.*]] = fadd fp128 [[TMP7]], 0xL00000000000000003FFF000000000000
+; DQQ-NEXT: [[TMP8:%.*]] = fadd fp128 [[TMP7]], f0x3FFF0000000000000000000000000000
; DQQ-NEXT: [[TMP9:%.*]] = call ptr @__nsan_get_shadow_ptr_for_double_store(ptr [[A]], i64 1)
; DQQ-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[A]] to i64
; DQQ-NEXT: [[TMP11:%.*]] = call i32 @__nsan_internal_check_double_q(double [[C]], fp128 [[TMP8]], i32 4, i64 [[TMP10]])
@@ -363,7 +363,7 @@ define void @load_add_store_double(ptr %a) sanitize_numerical_stability {
; DLQ: 6:
; DLQ-NEXT: [[TMP7:%.*]] = phi x86_fp80 [ [[TMP3]], [[TMP2]] ], [ [[TMP5]], [[TMP4]] ]
; DLQ-NEXT: [[C:%.*]] = fadd double [[B]], 1.000000e+00
-; DLQ-NEXT: [[TMP8:%.*]] = fadd x86_fp80 [[TMP7]], 0xK3FFF8000000000000000
+; DLQ-NEXT: [[TMP8:%.*]] = fadd x86_fp80 [[TMP7]], f0x3FFF8000000000000000
; DLQ-NEXT: [[TMP9:%.*]] = call ptr @__nsan_get_shadow_ptr_for_double_store(ptr [[A]], i64 1)
; DLQ-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[A]] to i64
; DLQ-NEXT: [[TMP11:%.*]] = call i32 @__nsan_internal_check_double_l(double [[C]], x86_fp80 [[TMP8]], i32 4, i64 [[TMP10]])
@@ -481,22 +481,22 @@ define void @call_fn_taking_float() sanitize_numerical_stability {
; DQQ-NEXT: entry:
; DQQ-NEXT: store ptr @takes_floats, ptr @__nsan_shadow_args_tag, align 8
; DQQ-NEXT: store double 1.000000e+00, ptr @__nsan_shadow_args_ptr, align 1
-; DQQ-NEXT: store fp128 0xL00000000000000004000800000000000, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 8), align 1
-; DQQ-NEXT: store fp128 0xLC0000000000000003FC9CD2B297D889B, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 24), align 1
-; DQQ-NEXT: call void @takes_floats(float 1.000000e+00, i8 2, double 3.000000e+00, x86_fp80 0xK3FC9E69594BEC44DE000)
+; DQQ-NEXT: store fp128 f0x40008000000000000000000000000000, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 8), align 1
+; DQQ-NEXT: store fp128 f0x3FC9CD2B297D889BC000000000000000, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 24), align 1
+; DQQ-NEXT: call void @takes_floats(float 1.000000e+00, i8 2, double 3.000000e+00, x86_fp80 f0x3FC9E69594BEC44DE000)
; DQQ-NEXT: ret void
;
; DLQ-LABEL: @call_fn_taking_float(
; DLQ-NEXT: entry:
; DLQ-NEXT: store ptr @takes_floats, ptr @__nsan_shadow_args_tag, align 8
; DLQ-NEXT: store double 1.000000e+00, ptr @__nsan_shadow_args_ptr, align 1
-; DLQ-NEXT: store x86_fp80 0xK4000C000000000000000, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 8), align 1
-; DLQ-NEXT: store fp128 0xLC0000000000000003FC9CD2B297D889B, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 18), align 1
-; DLQ-NEXT: call void @takes_floats(float 1.000000e+00, i8 2, double 3.000000e+00, x86_fp80 0xK3FC9E69594BEC44DE000)
+; DLQ-NEXT: store x86_fp80 f0x4000C000000000000000, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 8), align 1
+; DLQ-NEXT: store fp128 f0x3FC9CD2B297D889BC000000000000000, ptr getelementptr ([16384 x i8], ptr @__nsan_shadow_args_ptr, i64 0, i64 18), align 1
+; DLQ-NEXT: call void @takes_floats(float 1.000000e+00, i8 2, double 3.000000e+00, x86_fp80 f0x3FC9E69594BEC44DE000)
; DLQ-NEXT: ret void
;
entry:
- call void @takes_floats(float 1.0, i8 2, double 3.0, x86_fp80 0xK3FC9E69594BEC44DE000)
+ call void @takes_floats(float 1.0, i8 2, double 3.0, x86_fp80 f0x3FC9E69594BEC44DE000)
ret void
}
@@ -547,7 +547,7 @@ define double @call_sin_libfunc() sanitize_numerical_stability {
; DQQ-LABEL: @call_sin_libfunc(
; DQQ-NEXT: entry:
; DQQ-NEXT: [[R:%.*]] = call double @sin(double 1.000000e+00) #[[ATTR4]]
-; DQQ-NEXT: [[TMP0:%.*]] = call x86_fp80 @llvm.sin.f80(x86_fp80 0xK3FFF8000000000000000)
+; DQQ-NEXT: [[TMP0:%.*]] = call x86_fp80 @llvm.sin.f80(x86_fp80 f0x3FFF8000000000000000)
; DQQ-NEXT: [[TMP1:%.*]] = fpext x86_fp80 [[TMP0]] to fp128
; DQQ-NEXT: [[TMP2:%.*]] = call i32 @__nsan_internal_check_double_q(double [[R]], fp128 [[TMP1]], i32 1, i64 0)
; DQQ-NEXT: [[TMP3:%.*]] = icmp eq i32 [[TMP2]], 1
@@ -560,7 +560,7 @@ define double @call_sin_libfunc() sanitize_numerical_stability {
; DLQ-LABEL: @call_sin_libfunc(
; DLQ-NEXT: entry:
; DLQ-NEXT: [[R:%.*]] = call double @sin(double 1.000000e+00) #[[ATTR4]]
-; DLQ-NEXT: [[TMP0:%.*]] = call x86_fp80 @llvm.sin.f80(x86_fp80 0xK3FFF8000000000000000)
+; DLQ-NEXT: [[TMP0:%.*]] = call x86_fp80 @llvm.sin.f80(x86_fp80 f0x3FFF8000000000000000)
; DLQ-NEXT: [[TMP1:%.*]] = call i32 @__nsan_internal_check_double_l(double [[R]], x86_fp80 [[TMP0]], i32 1, i64 0)
; DLQ-NEXT: [[TMP2:%.*]] = icmp eq i32 [[TMP1]], 1
; DLQ-NEXT: [[TMP3:%.*]] = fpext double [[R]] to x86_fp80
diff --git a/llvm/test/Transforms/Attributor/IPConstantProp/fp-bc-icmp-const-fold.ll b/llvm/test/Transforms/Attributor/IPConstantProp/fp-bc-icmp-const-fold.ll
index c172eb2975c7df..707796861ed73c 100644
--- a/llvm/test/Transforms/Attributor/IPConstantProp/fp-bc-icmp-const-fold.ll
+++ b/llvm/test/Transforms/Attributor/IPConstantProp/fp-bc-icmp-const-fold.ll
@@ -28,7 +28,7 @@ define void @test(i32 signext %n, i1 %arg) {
; CHECK: if.else14:
; CHECK-NEXT: br label [[DO_BODY:%.*]]
; CHECK: do.body:
-; CHECK-NEXT: [[SCALE_0:%.*]] = phi ppc_fp128 [ 0xM3FF00000000000000000000000000000, [[IF_ELSE14]] ], [ [[SCALE_0]], [[DO_BODY]] ]
+; CHECK-NEXT: [[SCALE_0:%.*]] = phi ppc_fp128 [ f0x00000000000000003FF0000000000000, [[IF_ELSE14]] ], [ [[SCALE_0]], [[DO_BODY]] ]
; CHECK-NEXT: br i1 [[ARG]], label [[DO_BODY]], label [[IF_THEN33:%.*]]
; CHECK: if.then33:
; CHECK-NEXT: br i1 [[ARG]], label [[_ZN5BOOST4MATH4SIGNIGEEIRKT__EXIT30:%.*]], label [[COND_FALSE_I28:%.*]]
@@ -68,7 +68,7 @@ if.else14: ; preds = %if.end4
br label %do.body
do.body: ; preds = %do.body, %if.else14
- %scale.0 = phi ppc_fp128 [ 0xM3FF00000000000000000000000000000, %if.else14 ], [ %scale.0, %do.body ]
+ %scale.0 = phi ppc_fp128 [ f0x00000000000000003FF0000000000000, %if.else14 ], [ %scale.0, %do.body ]
br i1 %arg, label %do.body, label %if.then33
if.then33: ; preds = %do.body
diff --git a/llvm/test/Transforms/Attributor/nofpclass.ll b/llvm/test/Transforms/Attributor/nofpclass.ll
index b97454a29d5135..52c763f280b386 100644
--- a/llvm/test/Transforms/Attributor/nofpclass.ll
+++ b/llvm/test/Transforms/Attributor/nofpclass.ll
@@ -514,14 +514,14 @@ define half @fcmp_assume_issubnormal_callsite_arg_return(half %arg) {
; CHECK-SAME: (half returned [[ARG:%.*]]) {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[FABS:%.*]] = call nofpclass(ninf nzero nsub nnorm) half @llvm.fabs.f16(half [[ARG]]) #[[ATTR20:[0-9]+]]
-; CHECK-NEXT: [[IS_SUBNORMAL:%.*]] = fcmp olt half [[FABS]], 0xH0400
+; CHECK-NEXT: [[IS_SUBNORMAL:%.*]] = fcmp olt half [[FABS]], f0x0400
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[IS_SUBNORMAL]]) #[[ATTR18]]
; CHECK-NEXT: call void @extern.use.f16(half [[ARG]])
; CHECK-NEXT: ret half [[ARG]]
;
entry:
%fabs = call half @llvm.fabs.f16(half %arg)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
call void @llvm.assume(i1 %is.subnormal)
call void @extern.use.f16(half %arg)
ret half %arg
@@ -533,13 +533,13 @@ define half @fcmp_assume_not_inf_after_call(half %arg) {
; CHECK-SAME: (half returned [[ARG:%.*]]) {
; CHECK-NEXT: entry:
; CHECK-NEXT: call void @extern.use.f16(half [[ARG]])
-; CHECK-NEXT: [[NOT_INF:%.*]] = fcmp oeq half [[ARG]], 0xH7C00
+; CHECK-NEXT: [[NOT_INF:%.*]] = fcmp oeq half [[ARG]], f0x7C00
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[NOT_INF]])
; CHECK-NEXT: ret half [[ARG]]
;
entry:
call void @extern.use.f16(half %arg)
- %not.inf = fcmp oeq half %arg, 0xH7C00
+ %not.inf = fcmp oeq half %arg, f0x7C00
call void @llvm.assume(i1 %not.inf)
ret half %arg
}
@@ -550,19 +550,19 @@ define half @fcmp_assume2_callsite_arg_return(half %arg) {
; CHECK-SAME: (half returned [[ARG:%.*]]) {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[FABS:%.*]] = call nofpclass(ninf nzero nsub nnorm) half @llvm.fabs.f16(half [[ARG]]) #[[ATTR20]]
-; CHECK-NEXT: [[NOT_SUBNORMAL_OR_ZERO:%.*]] = fcmp oge half [[FABS]], 0xH0400
+; CHECK-NEXT: [[NOT_SUBNORMAL_OR_ZERO:%.*]] = fcmp oge half [[FABS]], f0x0400
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[NOT_SUBNORMAL_OR_ZERO]]) #[[ATTR18]]
-; CHECK-NEXT: [[NOT_INF:%.*]] = fcmp one half [[ARG]], 0xH7C00
+; CHECK-NEXT: [[NOT_INF:%.*]] = fcmp one half [[ARG]], f0x7C00
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[NOT_INF]]) #[[ATTR18]]
; CHECK-NEXT: call void @extern.use.f16(half [[ARG]])
; CHECK-NEXT: ret half [[ARG]]
;
entry:
%fabs = call half @llvm.fabs.f16(half %arg)
- %not.subnormal.or.zero = fcmp oge half %fabs, 0xH0400
+ %not.subnormal.or.zero = fcmp oge half %fabs, f0x0400
call void @llvm.assume(i1 %not.subnormal.or.zero)
- %not.inf = fcmp one half %arg, 0xH7C00
+ %not.inf = fcmp one half %arg, f0x7C00
call void @llvm.assume(i1 %not.inf)
call void @extern.use.f16(half %arg)
@@ -592,9 +592,9 @@ define half @assume_fcmp_fabs_with_other_fabs_assume(half %arg) {
; CHECK-SAME: (half returned [[ARG:%.*]]) {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[FABS:%.*]] = call nofpclass(ninf nzero nsub nnorm) half @llvm.fabs.f16(half [[ARG]]) #[[ATTR20]]
-; CHECK-NEXT: [[UNRELATED_FABS:%.*]] = fcmp one half [[FABS]], 0xH0000
+; CHECK-NEXT: [[UNRELATED_FABS:%.*]] = fcmp one half [[FABS]], f0x0000
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[UNRELATED_FABS]]) #[[ATTR18]]
-; CHECK-NEXT: [[IS_SUBNORMAL:%.*]] = fcmp olt half [[FABS]], 0xH0400
+; CHECK-NEXT: [[IS_SUBNORMAL:%.*]] = fcmp olt half [[FABS]], f0x0400
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[IS_SUBNORMAL]]) #[[ATTR18]]
; CHECK-NEXT: call void @extern.use.f16(half [[ARG]])
; CHECK-NEXT: call void @extern.use.f16(half nofpclass(ninf nzero nsub nnorm) [[FABS]])
@@ -605,7 +605,7 @@ entry:
%fabs = call half @llvm.fabs.f16(half %arg)
%unrelated.fabs = fcmp one half %fabs, 0.0
call void @llvm.assume(i1 %unrelated.fabs)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
call void @llvm.assume(i1 %is.subnormal)
call void @extern.use.f16(half %arg)
call void @extern.use.f16(half %fabs)
@@ -620,7 +620,7 @@ define half @assume_fcmp_fabs_with_other_fabs_assume_fallback(half %arg) {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[FABS:%.*]] = call nofpclass(ninf nzero nsub nnorm) half @llvm.fabs.f16(half [[ARG]]) #[[ATTR20]]
; CHECK-NEXT: call void @llvm.assume(i1 noundef true) #[[ATTR18]]
-; CHECK-NEXT: [[UNRELATED_FABS:%.*]] = fcmp oeq half [[FABS]], 0xH0000
+; CHECK-NEXT: [[UNRELATED_FABS:%.*]] = fcmp oeq half [[FABS]], f0x0000
; CHECK-NEXT: call void @llvm.assume(i1 noundef [[UNRELATED_FABS]]) #[[ATTR18]]
; CHECK-NEXT: call void @llvm.assume(i1 noundef true) #[[ATTR18]]
; CHECK-NEXT: call void @extern.use.f16(half [[ARG]])
@@ -631,13 +631,13 @@ entry:
%fabs = call half @llvm.fabs.f16(half %arg)
- %one.inf = fcmp one half %arg, 0xH7C00
+ %one.inf = fcmp one half %arg, f0x7C00
call void @llvm.assume(i1 %one.inf)
%unrelated.fabs = fcmp oeq half %fabs, 0.0
call void @llvm.assume(i1 %unrelated.fabs)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
call void @llvm.assume(i1 %is.subnormal)
call void @extern.use.f16(half %arg)
call void @extern.use.f16(half %fabs)
diff --git a/llvm/test/Transforms/CodeGenPrepare/AArch64/fpclass-test.ll b/llvm/test/Transforms/CodeGenPrepare/AArch64/fpclass-test.ll
index 63ab22e96ad2ad..021c510a2129e9 100644
--- a/llvm/test/Transforms/CodeGenPrepare/AArch64/fpclass-test.ll
+++ b/llvm/test/Transforms/CodeGenPrepare/AArch64/fpclass-test.ll
@@ -96,7 +96,7 @@ define i1 @test_fp128_is_inf_or_nan(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp ueq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp ueq fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -107,7 +107,7 @@ define i1 @test_fp128_is_not_inf_or_nan(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp one fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp one fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -118,7 +118,7 @@ define i1 @test_fp128_is_inf(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp oeq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp oeq fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -129,6 +129,6 @@ define i1 @test_fp128_is_not_inf(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp une fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp une fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
diff --git a/llvm/test/Transforms/CodeGenPrepare/RISCV/fpclass-test.ll b/llvm/test/Transforms/CodeGenPrepare/RISCV/fpclass-test.ll
index 7c00218bdcce3d..c6843b222755fa 100644
--- a/llvm/test/Transforms/CodeGenPrepare/RISCV/fpclass-test.ll
+++ b/llvm/test/Transforms/CodeGenPrepare/RISCV/fpclass-test.ll
@@ -96,7 +96,7 @@ define i1 @test_fp128_is_inf_or_nan(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp ueq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp ueq fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -107,7 +107,7 @@ define i1 @test_fp128_is_not_inf_or_nan(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp one fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp one fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -118,7 +118,7 @@ define i1 @test_fp128_is_inf(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp oeq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp oeq fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -129,6 +129,6 @@ define i1 @test_fp128_is_not_inf(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp une fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp une fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
diff --git a/llvm/test/Transforms/CodeGenPrepare/X86/fpclass-test.ll b/llvm/test/Transforms/CodeGenPrepare/X86/fpclass-test.ll
index 525caeb3e79a10..f11da693f9d83e 100644
--- a/llvm/test/Transforms/CodeGenPrepare/X86/fpclass-test.ll
+++ b/llvm/test/Transforms/CodeGenPrepare/X86/fpclass-test.ll
@@ -96,7 +96,7 @@ define i1 @test_fp128_is_inf_or_nan(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp ueq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp ueq fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -107,7 +107,7 @@ define i1 @test_fp128_is_not_inf_or_nan(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp one fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp one fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -118,7 +118,7 @@ define i1 @test_fp128_is_inf(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp oeq fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp oeq fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -129,7 +129,7 @@ define i1 @test_fp128_is_not_inf(fp128 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call fp128 @llvm.fabs.f128(fp128 %arg)
- %ret = fcmp une fp128 %abs, 0xL00000000000000007FFF000000000000
+ %ret = fcmp une fp128 %abs, f0x7FFF0000000000000000000000000000
ret i1 %ret
}
@@ -140,7 +140,7 @@ define i1 @test_x86_fp80_is_inf_or_nan(x86_fp80 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call x86_fp80 @llvm.fabs.f80(x86_fp80 %arg)
- %ret = fcmp ueq x86_fp80 %abs, 0xK7FFF8000000000000000
+ %ret = fcmp ueq x86_fp80 %abs, f0x7FFF8000000000000000
ret i1 %ret
}
@@ -151,7 +151,7 @@ define i1 @test_x86_fp80_is_not_inf_or_nan(x86_fp80 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call x86_fp80 @llvm.fabs.f80(x86_fp80 %arg)
- %ret = fcmp one x86_fp80 %abs, 0xK7FFF8000000000000000
+ %ret = fcmp one x86_fp80 %abs, f0x7FFF8000000000000000
ret i1 %ret
}
@@ -162,7 +162,7 @@ define i1 @test_x86_fp80_is_inf(x86_fp80 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call x86_fp80 @llvm.fabs.f80(x86_fp80 %arg)
- %ret = fcmp oeq x86_fp80 %abs, 0xK7FFF8000000000000000
+ %ret = fcmp oeq x86_fp80 %abs, f0x7FFF8000000000000000
ret i1 %ret
}
@@ -173,6 +173,6 @@ define i1 @test_x86_fp80_is_not_inf(x86_fp80 %arg) {
; CHECK-NEXT: ret i1 [[TMP1]]
;
%abs = tail call x86_fp80 @llvm.fabs.f80(x86_fp80 %arg)
- %ret = fcmp une x86_fp80 %abs, 0xK7FFF8000000000000000
+ %ret = fcmp une x86_fp80 %abs, f0x7FFF8000000000000000
ret i1 %ret
}
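The x86_fp80 literals keep their 20 hex digits but drop the 0xK prefix. f0x7FFF8000000000000000 is +infinity: unlike the IEEE interchange formats, the 80-bit x87 format stores an explicit integer bit, which must be set here, so the significand is 0x8000000000000000 rather than zero. Sketch:

  ; Illustrative snippet: x87 +infinity, with the explicit integer
  ; bit (bit 63 of the significand) set as the format requires.
  declare x86_fp80 @llvm.fabs.f80(x86_fp80)

  define i1 @is_x87_inf(x86_fp80 %arg) {
    %abs = call x86_fp80 @llvm.fabs.f80(x86_fp80 %arg)
    %ret = fcmp oeq x86_fp80 %abs, f0x7FFF8000000000000000
    ret i1 %ret
  }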
diff --git a/llvm/test/Transforms/EarlyCSE/atan.ll b/llvm/test/Transforms/EarlyCSE/atan.ll
index 2b7206c0a6aab6..2dc96dfeecc794 100644
--- a/llvm/test/Transforms/EarlyCSE/atan.ll
+++ b/llvm/test/Transforms/EarlyCSE/atan.ll
@@ -43,10 +43,10 @@ define float @callatanDenorm() {
; TODO: long double calls currently not folded
define x86_fp80 @atanl_x86(x86_fp80 %x) {
; CHECK-LABEL: @atanl_x86(
-; CHECK-NEXT: [[CALL:%.*]] = call x86_fp80 @atanl(x86_fp80 noundef 0xK3FFF8CCCCCCCCCCCCCCD)
+; CHECK-NEXT: [[CALL:%.*]] = call x86_fp80 @atanl(x86_fp80 noundef f0x3FFF8CCCCCCCCCCCCCCD)
; CHECK-NEXT: ret x86_fp80 [[CALL]]
;
- %call = call x86_fp80 @atanl(x86_fp80 noundef 0xK3FFF8CCCCCCCCCCCCCCD)
+ %call = call x86_fp80 @atanl(x86_fp80 noundef f0x3FFF8CCCCCCCCCCCCCCD)
ret x86_fp80 %call
}
diff --git a/llvm/test/Transforms/EarlyCSE/math-2.ll b/llvm/test/Transforms/EarlyCSE/math-2.ll
index 0d55165e3662fa..069a856a833315 100644
--- a/llvm/test/Transforms/EarlyCSE/math-2.ll
+++ b/llvm/test/Transforms/EarlyCSE/math-2.ll
@@ -102,9 +102,9 @@ define double @i_powi() {
define half @pr98665() {
; CHECK-LABEL: @pr98665(
-; CHECK-NEXT: ret half 0xH3C00
+; CHECK-NEXT: ret half f0x3C00
;
- %x = call half @llvm.powi.f16.i32(half 0xH3C00, i32 1)
+ %x = call half @llvm.powi.f16.i32(half f0x3C00, i32 1)
ret half %x
}
diff --git a/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-si129tofp.ll b/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-si129tofp.ll
index f70ce2f85f65bd..f84f0dc5d7bf5e 100644
--- a/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-si129tofp.ll
+++ b/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-si129tofp.ll
@@ -80,7 +80,7 @@ define half @si129tohalf(i129 %a) {
; CHECK-NEXT: [[TMP52:%.*]] = fptrunc float [[TMP51]] to half
; CHECK-NEXT: br label [[ITOFP_RETURN]]
; CHECK: itofp-return:
-; CHECK-NEXT: [[TMP53:%.*]] = phi half [ [[TMP52]], [[ITOFP_IF_END26]] ], [ 0xH0000, [[ITOFP_ENTRY:%.*]] ]
+; CHECK-NEXT: [[TMP53:%.*]] = phi half [ [[TMP52]], [[ITOFP_IF_END26]] ], [ f0x0000, [[ITOFP_ENTRY:%.*]] ]
; CHECK-NEXT: ret half [[TMP53]]
;
%conv = sitofp i129 %a to half
@@ -337,7 +337,7 @@ define x86_fp80 @si129tox86_fp80(i129 %a) {
; CHECK-NEXT: [[TMP50:%.*]] = fptrunc fp128 [[TMP49]] to x86_fp80
; CHECK-NEXT: br label [[ITOFP_RETURN]]
; CHECK: itofp-return:
-; CHECK-NEXT: [[TMP51:%.*]] = phi x86_fp80 [ [[TMP50]], [[ITOFP_IF_END26]] ], [ 0xK00000000000000000000, [[ITOFP_ENTRY:%.*]] ]
+; CHECK-NEXT: [[TMP51:%.*]] = phi x86_fp80 [ [[TMP50]], [[ITOFP_IF_END26]] ], [ f0x00000000000000000000, [[ITOFP_ENTRY:%.*]] ]
; CHECK-NEXT: ret x86_fp80 [[TMP51]]
;
%conv = sitofp i129 %a to x86_fp80
@@ -420,7 +420,7 @@ define fp128 @si129tofp128(i129 %a) {
; CHECK-NEXT: [[TMP49:%.*]] = bitcast i128 [[TMP48]] to fp128
; CHECK-NEXT: br label [[ITOFP_RETURN]]
; CHECK: itofp-return:
-; CHECK-NEXT: [[TMP50:%.*]] = phi fp128 [ [[TMP49]], [[ITOFP_IF_END26]] ], [ 0xL00000000000000000000000000000000, [[ITOFP_ENTRY:%.*]] ]
+; CHECK-NEXT: [[TMP50:%.*]] = phi fp128 [ [[TMP49]], [[ITOFP_IF_END26]] ], [ f0x00000000000000000000000000000000, [[ITOFP_ENTRY:%.*]] ]
; CHECK-NEXT: ret fp128 [[TMP50]]
;
%conv = sitofp i129 %a to fp128
diff --git a/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-ui129tofp.ll b/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-ui129tofp.ll
index ee54d53e9ba03a..77395cd5d6d9cc 100644
--- a/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-ui129tofp.ll
+++ b/llvm/test/Transforms/ExpandLargeFpConvert/X86/expand-large-fp-convert-ui129tofp.ll
@@ -80,7 +80,7 @@ define half @ui129tohalf(i129 %a) {
; CHECK-NEXT: [[TMP52:%.*]] = fptrunc float [[TMP51]] to half
; CHECK-NEXT: br label [[ITOFP_RETURN]]
; CHECK: itofp-return:
-; CHECK-NEXT: [[TMP53:%.*]] = phi half [ [[TMP52]], [[ITOFP_IF_END26]] ], [ 0xH0000, [[ITOFP_ENTRY:%.*]] ]
+; CHECK-NEXT: [[TMP53:%.*]] = phi half [ [[TMP52]], [[ITOFP_IF_END26]] ], [ f0x0000, [[ITOFP_ENTRY:%.*]] ]
; CHECK-NEXT: ret half [[TMP53]]
;
%conv = uitofp i129 %a to half
@@ -337,7 +337,7 @@ define x86_fp80 @ui129tox86_fp80(i129 %a) {
; CHECK-NEXT: [[TMP50:%.*]] = fptrunc fp128 [[TMP49]] to x86_fp80
; CHECK-NEXT: br label [[ITOFP_RETURN]]
; CHECK: itofp-return:
-; CHECK-NEXT: [[TMP51:%.*]] = phi x86_fp80 [ [[TMP50]], [[ITOFP_IF_END26]] ], [ 0xK00000000000000000000, [[ITOFP_ENTRY:%.*]] ]
+; CHECK-NEXT: [[TMP51:%.*]] = phi x86_fp80 [ [[TMP50]], [[ITOFP_IF_END26]] ], [ f0x00000000000000000000, [[ITOFP_ENTRY:%.*]] ]
; CHECK-NEXT: ret x86_fp80 [[TMP51]]
;
%conv = uitofp i129 %a to x86_fp80
@@ -420,7 +420,7 @@ define fp128 @ui129tofp128(i129 %a) {
; CHECK-NEXT: [[TMP49:%.*]] = bitcast i128 [[TMP48]] to fp128
; CHECK-NEXT: br label [[ITOFP_RETURN]]
; CHECK: itofp-return:
-; CHECK-NEXT: [[TMP50:%.*]] = phi fp128 [ [[TMP49]], [[ITOFP_IF_END26]] ], [ 0xL00000000000000000000000000000000, [[ITOFP_ENTRY:%.*]] ]
+; CHECK-NEXT: [[TMP50:%.*]] = phi fp128 [ [[TMP49]], [[ITOFP_IF_END26]] ], [ f0x00000000000000000000000000000000, [[ITOFP_ENTRY:%.*]] ]
; CHECK-NEXT: ret fp128 [[TMP50]]
;
%conv = uitofp i129 %a to fp128
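Positive zero is the all-zero bit pattern at every width, so the phi constants above become f0x0000 (half), f0x00000000000000000000 (x86_fp80), and f0x00000000000000000000000000000000 (fp128); with the type-selecting prefix gone, it is the digit count that matches the literal to the type's bit width. For example:

  ; Illustrative snippet: four hex digits cover the 16 bits of half.
  define half @half_zero() {
    ret half f0x0000
  }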
diff --git a/llvm/test/Transforms/IndVarSimplify/2008-11-25-APFloatAssert.ll b/llvm/test/Transforms/IndVarSimplify/2008-11-25-APFloatAssert.ll
index b734a47f874d0e..5592880d42287c 100644
--- a/llvm/test/Transforms/IndVarSimplify/2008-11-25-APFloatAssert.ll
+++ b/llvm/test/Transforms/IndVarSimplify/2008-11-25-APFloatAssert.ll
@@ -5,7 +5,7 @@ entry:
br label %bb23.i91
bb23.i91: ; preds = %bb23.i91, %entry
- %result.0.i89 = phi ppc_fp128 [ 0xM00000000000000000000000000000000, %entry ], [ %0, %bb23.i91 ] ; <ppc_fp128> [#uses=2]
+ %result.0.i89 = phi ppc_fp128 [ f0x00000000000000000000000000000000, %entry ], [ %0, %bb23.i91 ] ; <ppc_fp128> [#uses=2]
%0 = fmul ppc_fp128 %result.0.i89, %result.0.i89 ; <ppc_fp128> [#uses=1]
br label %bb23.i91
}
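ppc_fp128 constants likewise trade the 0xM prefix for f0x; the 32 hex digits record the bit patterns of the two component doubles of the double-double format, here both +0.0. Sketch:

  ; Illustrative snippet: both component doubles zero => ppc_fp128 +0.0.
  define ppc_fp128 @ppcf128_zero() {
    ret ppc_fp128 f0x00000000000000000000000000000000
  }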
diff --git a/llvm/test/Transforms/Inline/simplify-fp128.ll b/llvm/test/Transforms/Inline/simplify-fp128.ll
index 73e63702cefcba..57f357e388c99f 100644
--- a/llvm/test/Transforms/Inline/simplify-fp128.ll
+++ b/llvm/test/Transforms/Inline/simplify-fp128.ll
@@ -4,7 +4,7 @@
define void @fli() {
; CHECK-LABEL: define void @fli() {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = call fp128 @llvm.floor.f128(fp128 0xL999999999999999A4001199999999999)
+; CHECK-NEXT: [[TMP0:%.*]] = call fp128 @llvm.floor.f128(fp128 f0x4001199999999999999999999999999A)
; CHECK-NEXT: ret void
;
entry:
@@ -15,10 +15,10 @@ entry:
define void @sc() {
; CHECK-LABEL: define void @sc() {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = tail call fp128 @llvm.floor.f128(fp128 0xL999999999999999A4001199999999999)
+; CHECK-NEXT: [[TMP0:%.*]] = tail call fp128 @llvm.floor.f128(fp128 f0x4001199999999999999999999999999A)
; CHECK-NEXT: ret void
;
entry:
- %0 = tail call fp128 @llvm.floor.f128(fp128 0xL999999999999999A4001199999999999)
+ %0 = tail call fp128 @llvm.floor.f128(fp128 f0x4001199999999999999999999999999A)
ret void
}
diff --git a/llvm/test/Transforms/InstCombine/2008-02-28-OrFCmpCrash.ll b/llvm/test/Transforms/InstCombine/2008-02-28-OrFCmpCrash.ll
index f151605627a685..a38788827cdb04 100644
--- a/llvm/test/Transforms/InstCombine/2008-02-28-OrFCmpCrash.ll
+++ b/llvm/test/Transforms/InstCombine/2008-02-28-OrFCmpCrash.ll
@@ -6,7 +6,7 @@
define float @test(float %x, x86_fp80 %y) nounwind readonly {
; CHECK-LABEL: @test(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP67:%.*]] = fcmp uno x86_fp80 [[Y:%.*]], 0xK00000000000000000000
+; CHECK-NEXT: [[TMP67:%.*]] = fcmp uno x86_fp80 [[Y:%.*]], f0x00000000000000000000
; CHECK-NEXT: [[TMP71:%.*]] = fcmp uno float [[X:%.*]], 0.000000e+00
; CHECK-NEXT: [[BOTHCOND:%.*]] = or i1 [[TMP67]], [[TMP71]]
; CHECK-NEXT: br i1 [[BOTHCOND]], label [[BB74:%.*]], label [[BB80:%.*]]
@@ -16,7 +16,7 @@ define float @test(float %x, x86_fp80 %y) nounwind readonly {
; CHECK-NEXT: ret float 0.000000e+00
;
entry:
- %tmp67 = fcmp uno x86_fp80 %y, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %tmp67 = fcmp uno x86_fp80 %y, f0x00000000000000000000 ; <i1> [#uses=1]
%tmp71 = fcmp uno float %x, 0.000000e+00 ; <i1> [#uses=1]
%bothcond = or i1 %tmp67, %tmp71 ; <i1> [#uses=1]
br i1 %bothcond, label %bb74, label %bb80
@@ -31,7 +31,7 @@ bb80: ; preds = %entry
define float @test_logical(float %x, x86_fp80 %y) nounwind readonly {
; CHECK-LABEL: @test_logical(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP67:%.*]] = fcmp uno x86_fp80 [[Y:%.*]], 0xK00000000000000000000
+; CHECK-NEXT: [[TMP67:%.*]] = fcmp uno x86_fp80 [[Y:%.*]], f0x00000000000000000000
; CHECK-NEXT: [[TMP71:%.*]] = fcmp uno float [[X:%.*]], 0.000000e+00
; CHECK-NEXT: [[BOTHCOND:%.*]] = select i1 [[TMP67]], i1 true, i1 [[TMP71]]
; CHECK-NEXT: br i1 [[BOTHCOND]], label [[BB74:%.*]], label [[BB80:%.*]]
@@ -41,7 +41,7 @@ define float @test_logical(float %x, x86_fp80 %y) nounwind readonly {
; CHECK-NEXT: ret float 0.000000e+00
;
entry:
- %tmp67 = fcmp uno x86_fp80 %y, 0xK00000000000000000000 ; <i1> [#uses=1]
+ %tmp67 = fcmp uno x86_fp80 %y, f0x00000000000000000000 ; <i1> [#uses=1]
%tmp71 = fcmp uno float %x, 0.000000e+00 ; <i1> [#uses=1]
%bothcond = select i1 %tmp67, i1 true, i1 %tmp71 ; <i1> [#uses=1]
br i1 %bothcond, label %bb74, label %bb80
diff --git a/llvm/test/Transforms/InstCombine/2009-02-04-FPBitcast.ll b/llvm/test/Transforms/InstCombine/2009-02-04-FPBitcast.ll
index 38e5f2f909ad9f..23171ca615267a 100644
--- a/llvm/test/Transforms/InstCombine/2009-02-04-FPBitcast.ll
+++ b/llvm/test/Transforms/InstCombine/2009-02-04-FPBitcast.ll
@@ -7,6 +7,6 @@ define x86_fp80 @cast() {
}
define i80 @invcast() {
- %tmp = bitcast x86_fp80 0xK00000000000000000000 to i80 ; <i80> [#uses=1]
+ %tmp = bitcast x86_fp80 f0x00000000000000000000 to i80 ; <i80> [#uses=1]
ret i80 %tmp
}
diff --git a/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul-idempotency.ll b/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul-idempotency.ll
index 6f8d8f23e3ebef..d8f0a090b292c8 100644
--- a/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul-idempotency.ll
+++ b/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul-idempotency.ll
@@ -61,7 +61,7 @@ define <vscale x 8 x half> @idempotent_fmul_two_dups(<vscale x 8 x i1> %pg, <vsc
; together is sane.
; CHECK-LABEL: define <vscale x 8 x half> @idempotent_fmul_two_dups(
; CHECK-SAME: <vscale x 8 x i1> [[PG:%.*]], <vscale x 8 x half> [[A:%.*]]) #[[ATTR0]] {
-; CHECK-NEXT: ret <vscale x 8 x half> splat (half 0xH3C00)
+; CHECK-NEXT: ret <vscale x 8 x half> splat (half f0x3C00)
;
%1 = call <vscale x 8 x half> @llvm.aarch64.sve.dup.x.nxv8f16(half 1.0)
%2 = call <vscale x 8 x half> @llvm.aarch64.sve.dup.x.nxv8f16(half 1.0)
@@ -73,7 +73,7 @@ define <vscale x 8 x half> @idempotent_fmul_two_dups(<vscale x 8 x i1> %pg, <vsc
define <vscale x 8 x half> @non_idempotent_fmul_f16(<vscale x 8 x i1> %pg, <vscale x 8 x half> %a) #0 {
; CHECK-LABEL: define <vscale x 8 x half> @non_idempotent_fmul_f16(
; CHECK-SAME: <vscale x 8 x i1> [[PG:%.*]], <vscale x 8 x half> [[A:%.*]]) #[[ATTR0]] {
-; CHECK-NEXT: [[TMP1:%.*]] = call <vscale x 8 x half> @llvm.aarch64.sve.fmul.nxv8f16(<vscale x 8 x i1> [[PG]], <vscale x 8 x half> [[A]], <vscale x 8 x half> splat (half 0xH4000))
+; CHECK-NEXT: [[TMP1:%.*]] = call <vscale x 8 x half> @llvm.aarch64.sve.fmul.nxv8f16(<vscale x 8 x i1> [[PG]], <vscale x 8 x half> [[A]], <vscale x 8 x half> splat (half f0x4000))
; CHECK-NEXT: ret <vscale x 8 x half> [[TMP1]]
;
%1 = call <vscale x 8 x half> @llvm.aarch64.sve.dup.x.nxv8f16(half 2.0)
diff --git a/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul_u-idempotency.ll b/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul_u-idempotency.ll
index 8278838abb4242..f64697be88b23a 100644
--- a/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul_u-idempotency.ll
+++ b/llvm/test/Transforms/InstCombine/AArch64/sve-intrinsic-fmul_u-idempotency.ll
@@ -61,7 +61,7 @@ define <vscale x 8 x half> @idempotent_fmul_u_two_dups(<vscale x 8 x i1> %pg, <v
; together is sane.
; CHECK-LABEL: define <vscale x 8 x half> @idempotent_fmul_u_two_dups(
; CHECK-SAME: <vscale x 8 x i1> [[PG:%.*]], <vscale x 8 x half> [[A:%.*]]) #[[ATTR0]] {
-; CHECK-NEXT: ret <vscale x 8 x half> splat (half 0xH3C00)
+; CHECK-NEXT: ret <vscale x 8 x half> splat (half f0x3C00)
;
%1 = call <vscale x 8 x half> @llvm.aarch64.sve.dup.x.nxv8f16(half 1.0)
%2 = call <vscale x 8 x half> @llvm.aarch64.sve.dup.x.nxv8f16(half 1.0)
@@ -73,7 +73,7 @@ define <vscale x 8 x half> @idempotent_fmul_u_two_dups(<vscale x 8 x i1> %pg, <v
define <vscale x 8 x half> @non_idempotent_fmul_u_f16(<vscale x 8 x i1> %pg, <vscale x 8 x half> %a) #0 {
; CHECK-LABEL: define <vscale x 8 x half> @non_idempotent_fmul_u_f16(
; CHECK-SAME: <vscale x 8 x i1> [[PG:%.*]], <vscale x 8 x half> [[A:%.*]]) #[[ATTR0]] {
-; CHECK-NEXT: [[TMP1:%.*]] = call <vscale x 8 x half> @llvm.aarch64.sve.fmul.u.nxv8f16(<vscale x 8 x i1> [[PG]], <vscale x 8 x half> [[A]], <vscale x 8 x half> splat (half 0xH4000))
+; CHECK-NEXT: [[TMP1:%.*]] = call <vscale x 8 x half> @llvm.aarch64.sve.fmul.u.nxv8f16(<vscale x 8 x i1> [[PG]], <vscale x 8 x half> [[A]], <vscale x 8 x half> splat (half f0x4000))
; CHECK-NEXT: ret <vscale x 8 x half> [[TMP1]]
;
%1 = call <vscale x 8 x half> @llvm.aarch64.sve.dup.x.nxv8f16(half 2.0)
diff --git a/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll b/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
index 5fdb918c875459..d7af3641981d2e 100644
--- a/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
+++ b/llvm/test/Transforms/InstCombine/AMDGPU/amdgcn-intrinsics.ll
@@ -83,7 +83,7 @@ declare double @llvm.amdgcn.sqrt.f64(double) nounwind readnone
define half @test_constant_fold_sqrt_f16_undef() nounwind {
; CHECK-LABEL: @test_constant_fold_sqrt_f16_undef(
-; CHECK-NEXT: ret half 0xH7E00
+; CHECK-NEXT: ret half f0x7E00
;
%val = call half @llvm.amdgcn.sqrt.f16(half undef) nounwind readnone
ret half %val
@@ -107,7 +107,7 @@ define double @test_constant_fold_sqrt_f64_undef() nounwind {
define half @test_constant_fold_sqrt_f16_0() nounwind {
; CHECK-LABEL: @test_constant_fold_sqrt_f16_0(
-; CHECK-NEXT: ret half 0xH0000
+; CHECK-NEXT: ret half f0x0000
;
%val = call half @llvm.amdgcn.sqrt.f16(half 0.0) nounwind readnone
ret half %val
@@ -133,7 +133,7 @@ define double @test_constant_fold_sqrt_f64_0() nounwind {
define half @test_constant_fold_sqrt_f16_neg0() nounwind {
; CHECK-LABEL: @test_constant_fold_sqrt_f16_neg0(
-; CHECK-NEXT: ret half 0xH8000
+; CHECK-NEXT: ret half f0x8000
;
%val = call half @llvm.amdgcn.sqrt.f16(half -0.0) nounwind readnone
ret half %val
@@ -1146,7 +1146,7 @@ define <2 x half> @constant_splat0_cvt_pkrtz() {
define <2 x half> @constant_cvt_pkrtz() {
; CHECK-LABEL: @constant_cvt_pkrtz(
-; CHECK-NEXT: ret <2 x half> <half 0xH4000, half 0xH4400>
+; CHECK-NEXT: ret <2 x half> <half f0x4000, half f0x4400>
;
%cvt = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float 2.0, float 4.0)
ret <2 x half> %cvt
@@ -1155,7 +1155,7 @@ define <2 x half> @constant_cvt_pkrtz() {
; Test constant values where rtz changes result
define <2 x half> @constant_rtz_pkrtz() {
; CHECK-LABEL: @constant_rtz_pkrtz(
-; CHECK-NEXT: ret <2 x half> splat (half 0xH7BFF)
+; CHECK-NEXT: ret <2 x half> splat (half f0x7BFF)
;
%cvt = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float 65535.0, float 65535.0)
ret <2 x half> %cvt
@@ -1163,7 +1163,7 @@ define <2 x half> @constant_rtz_pkrtz() {
define <2 x half> @fpext_const_cvt_pkrtz(half %x) {
; CHECK-LABEL: @fpext_const_cvt_pkrtz(
-; CHECK-NEXT: [[CVT:%.*]] = insertelement <2 x half> <half poison, half 0xH4200>, half [[X:%.*]], i64 0
+; CHECK-NEXT: [[CVT:%.*]] = insertelement <2 x half> <half poison, half f0x4200>, half [[X:%.*]], i64 0
; CHECK-NEXT: ret <2 x half> [[CVT]]
;
%ext = fpext half %x to float
@@ -1173,7 +1173,7 @@ define <2 x half> @fpext_const_cvt_pkrtz(half %x) {
define <2 x half> @const_fpext_cvt_pkrtz(half %y) {
; CHECK-LABEL: @const_fpext_cvt_pkrtz(
-; CHECK-NEXT: [[CVT:%.*]] = insertelement <2 x half> <half 0xH4500, half poison>, half [[Y:%.*]], i64 1
+; CHECK-NEXT: [[CVT:%.*]] = insertelement <2 x half> <half f0x4500, half poison>, half [[Y:%.*]], i64 1
; CHECK-NEXT: ret <2 x half> [[CVT]]
;
%ext = fpext half %y to float
@@ -1183,8 +1183,8 @@ define <2 x half> @const_fpext_cvt_pkrtz(half %y) {
define <2 x half> @const_fpext_multi_cvt_pkrtz(half %y) {
; CHECK-LABEL: @const_fpext_multi_cvt_pkrtz(
-; CHECK-NEXT: [[CVT1:%.*]] = insertelement <2 x half> <half 0xH4500, half poison>, half [[Y:%.*]], i64 1
-; CHECK-NEXT: [[CVT2:%.*]] = insertelement <2 x half> <half 0xH4200, half poison>, half [[Y]], i64 1
+; CHECK-NEXT: [[CVT1:%.*]] = insertelement <2 x half> <half f0x4500, half poison>, half [[Y:%.*]], i64 1
+; CHECK-NEXT: [[CVT2:%.*]] = insertelement <2 x half> <half f0x4200, half poison>, half [[Y]], i64 1
; CHECK-NEXT: [[ADD:%.*]] = fadd <2 x half> [[CVT1]], [[CVT2]]
; CHECK-NEXT: ret <2 x half> [[ADD]]
;
@@ -1673,9 +1673,9 @@ declare void @llvm.amdgcn.exp.compr.v2f16(i32 immarg, i32 immarg, <2 x half>, <2
define void @exp_compr_disabled_inputs_to_undef(<2 x half> %xy, <2 x half> %zw) {
; CHECK-LABEL: @exp_compr_disabled_inputs_to_undef(
; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 0, <2 x half> undef, <2 x half> undef, i1 true, i1 false)
-; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 1, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> undef, i1 true, i1 false)
-; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 2, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> undef, i1 true, i1 false)
-; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 3, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> undef, i1 true, i1 false)
+; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 1, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> undef, i1 true, i1 false)
+; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 2, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> undef, i1 true, i1 false)
+; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 3, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> undef, i1 true, i1 false)
; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 0, <2 x half> undef, <2 x half> undef, i1 true, i1 false)
; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 1, <2 x half> [[XY:%.*]], <2 x half> undef, i1 true, i1 false)
; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 2, <2 x half> [[XY]], <2 x half> undef, i1 true, i1 false)
@@ -4024,7 +4024,7 @@ define amdgpu_kernel void @image_sample_a16_c_d_o_2darray_V2(ptr addrspace(1) %o
define amdgpu_kernel void @image_sample_a16_c_d_o_2darray_const(ptr addrspace(1) %out, <8 x i32> inreg %rsrc, <4 x i32> inreg %samp, i32 %offset, float %zcompare, half %dsdh, half %dtdh, half %dsdv, half %dtdv, half %s, half %slice) {
; CHECK-LABEL: @image_sample_a16_c_d_o_2darray_const(
-; CHECK-NEXT: [[RES:%.*]] = call <2 x float> @llvm.amdgcn.image.sample.c.d.o.2darray.v2f32.f16.f16.v8i32.v4i32(i32 6, i32 [[OFFSET:%.*]], float [[ZCOMPARE:%.*]], half [[DSDH:%.*]], half [[DTDH:%.*]], half [[DSDV:%.*]], half [[DTDV:%.*]], half [[S:%.*]], half 0xH3400, half [[SLICE:%.*]], <8 x i32> [[RSRC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; CHECK-NEXT: [[RES:%.*]] = call <2 x float> @llvm.amdgcn.image.sample.c.d.o.2darray.v2f32.f16.f16.v8i32.v4i32(i32 6, i32 [[OFFSET:%.*]], float [[ZCOMPARE:%.*]], half [[DSDH:%.*]], half [[DTDH:%.*]], half [[DSDV:%.*]], half [[DTDV:%.*]], half [[S:%.*]], half f0x3400, half [[SLICE:%.*]], <8 x i32> [[RSRC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
; CHECK-NEXT: store <2 x float> [[RES]], ptr addrspace(1) [[OUT:%.*]], align 8
; CHECK-NEXT: ret void
;
@@ -6182,7 +6182,7 @@ define float @test_constant_fold_log_f32_snan() {
define half @test_constant_fold_log_f16_p0() {
; CHECK-LABEL: @test_constant_fold_log_f16_p0(
-; CHECK-NEXT: ret half 0xHFC00
+; CHECK-NEXT: ret half f0xFC00
;
%val = call half @llvm.amdgcn.log.f16(half 0.0)
ret half %val
@@ -6190,7 +6190,7 @@ define half @test_constant_fold_log_f16_p0() {
define half @test_constant_fold_log_f16_neg10() {
; CHECK-LABEL: @test_constant_fold_log_f16_neg10(
-; CHECK-NEXT: ret half 0xH7E00
+; CHECK-NEXT: ret half f0x7E00
;
%val = call half @llvm.amdgcn.log.f16(half -10.0)
ret half %val
@@ -6251,18 +6251,18 @@ define float @test_constant_fold_log_f32_ninf_strictfp() strictfp {
define half @test_constant_fold_log_f16_denorm() {
; CHECK-LABEL: @test_constant_fold_log_f16_denorm(
-; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.log.f16(half 0xH03FF)
+; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.log.f16(half f0x03FF)
; CHECK-NEXT: ret half [[VAL]]
;
- %val = call half @llvm.amdgcn.log.f16(half 0xH03ff)
+ %val = call half @llvm.amdgcn.log.f16(half f0x03ff)
ret half %val
}
define half @test_constant_fold_log_f16_neg_denorm() {
; CHECK-LABEL: @test_constant_fold_log_f16_neg_denorm(
-; CHECK-NEXT: ret half 0xH7E00
+; CHECK-NEXT: ret half f0x7E00
;
- %val = call half @llvm.amdgcn.log.f16(half 0xH83ff)
+ %val = call half @llvm.amdgcn.log.f16(half f0x83ff)
ret half %val
}
@@ -6427,7 +6427,7 @@ define float @test_constant_fold_exp2_f32_snan() {
define half @test_constant_fold_exp2_f16_p0() {
; CHECK-LABEL: @test_constant_fold_exp2_f16_p0(
-; CHECK-NEXT: ret half 0xH3C00
+; CHECK-NEXT: ret half f0x3C00
;
%val = call half @llvm.amdgcn.exp2.f16(half 0.0)
ret half %val
@@ -6435,7 +6435,7 @@ define half @test_constant_fold_exp2_f16_p0() {
define half @test_constant_fold_exp2_f16_neg10() {
; CHECK-LABEL: @test_constant_fold_exp2_f16_neg10(
-; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.exp2.f16(half 0xHC900)
+; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.exp2.f16(half f0xC900)
; CHECK-NEXT: ret half [[VAL]]
;
%val = call half @llvm.amdgcn.exp2.f16(half -10.0)
@@ -6532,19 +6532,19 @@ define float @test_constant_fold_exp2_f32_ninf_strictfp() strictfp {
define half @test_constant_fold_exp2_f16_denorm() {
; CHECK-LABEL: @test_constant_fold_exp2_f16_denorm(
-; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.exp2.f16(half 0xH03FF)
+; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.exp2.f16(half f0x03FF)
; CHECK-NEXT: ret half [[VAL]]
;
- %val = call half @llvm.amdgcn.exp2.f16(half 0xH03ff)
+ %val = call half @llvm.amdgcn.exp2.f16(half f0x03ff)
ret half %val
}
define half @test_constant_fold_exp2_f16_neg_denorm() {
; CHECK-LABEL: @test_constant_fold_exp2_f16_neg_denorm(
-; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.exp2.f16(half 0xH83FF)
+; CHECK-NEXT: [[VAL:%.*]] = call half @llvm.amdgcn.exp2.f16(half f0x83FF)
; CHECK-NEXT: ret half [[VAL]]
;
- %val = call half @llvm.amdgcn.exp2.f16(half 0xH83ff)
+ %val = call half @llvm.amdgcn.exp2.f16(half f0x83ff)
ret half %val
}
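Two recurring half constants in the folds above are worth decoding: f0xFC00 is -infinity (the fold of log(+0.0)), and f0x7E00 (exponent all ones, top fraction bit set) is the canonical quiet NaN for binary16, presumably also what the new +qnan spelling denotes at this type. Sketch:

  ; Illustrative snippet: the canonical binary16 quiet NaN by bit pattern.
  define half @qnan_half() {
    ; Exponent 0x1F, fraction 0x200 => quiet NaN, positive sign.
    ret half f0x7E00
  }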
diff --git a/llvm/test/Transforms/InstCombine/AMDGPU/fmed3.ll b/llvm/test/Transforms/InstCombine/AMDGPU/fmed3.ll
index a31b47b2ca6e70..2b363a96719664 100644
--- a/llvm/test/Transforms/InstCombine/AMDGPU/fmed3.ll
+++ b/llvm/test/Transforms/InstCombine/AMDGPU/fmed3.ll
@@ -67,7 +67,7 @@ define float @fmed3_f32_fpext_f16_k0(half %arg1, half %arg2) #1 {
;
; GFX9-LABEL: define float @fmed3_f32_fpext_f16_k0
; GFX9-SAME: (half [[ARG1:%.*]], half [[ARG2:%.*]]) #[[ATTR1]] {
-; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG1]], half [[ARG2]], half 0xH4000)
+; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG1]], half [[ARG2]], half f0x4000)
; GFX9-NEXT: [[MED3:%.*]] = fpext half [[MED31]] to float
; GFX9-NEXT: ret float [[MED3]]
;
@@ -87,7 +87,7 @@ define float @fmed3_f32_fpext_f16_k1(half %arg0, half %arg2) #1 {
;
; GFX9-LABEL: define float @fmed3_f32_fpext_f16_k1
; GFX9-SAME: (half [[ARG0:%.*]], half [[ARG2:%.*]]) #[[ATTR1]] {
-; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG0]], half [[ARG2]], half 0xH4000)
+; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG0]], half [[ARG2]], half f0x4000)
; GFX9-NEXT: [[MED3:%.*]] = fpext half [[MED31]] to float
; GFX9-NEXT: ret float [[MED3]]
;
@@ -107,7 +107,7 @@ define float @fmed3_f32_fpext_f16_k2(half %arg0, half %arg1) #1 {
;
; GFX9-LABEL: define float @fmed3_f32_fpext_f16_k2
; GFX9-SAME: (half [[ARG0:%.*]], half [[ARG1:%.*]]) #[[ATTR1]] {
-; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG0]], half [[ARG1]], half 0xH4000)
+; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG0]], half [[ARG1]], half f0x4000)
; GFX9-NEXT: [[MED3:%.*]] = fpext half [[MED31]] to float
; GFX9-NEXT: ret float [[MED3]]
;
@@ -126,7 +126,7 @@ define float @fmed3_f32_fpext_f16_k0_k1(half %arg2) #1 {
;
; GFX9-LABEL: define float @fmed3_f32_fpext_f16_k0_k1
; GFX9-SAME: (half [[ARG2:%.*]]) #[[ATTR1]] {
-; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG2]], half 0xH0000, half 0xH4C00)
+; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG2]], half f0x0000, half f0x4C00)
; GFX9-NEXT: [[MED3:%.*]] = fpext half [[MED31]] to float
; GFX9-NEXT: ret float [[MED3]]
;
@@ -144,7 +144,7 @@ define float @fmed3_f32_fpext_f16_k0_k2(half %arg1) #1 {
;
; GFX9-LABEL: define float @fmed3_f32_fpext_f16_k0_k2
; GFX9-SAME: (half [[ARG1:%.*]]) #[[ATTR1]] {
-; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG1]], half 0xH0000, half 0xH4000)
+; GFX9-NEXT: [[MED31:%.*]] = call half @llvm.amdgcn.fmed3.f16(half [[ARG1]], half f0x0000, half f0x4000)
; GFX9-NEXT: [[MED3:%.*]] = fpext half [[MED31]] to float
; GFX9-NEXT: ret float [[MED3]]
;
diff --git a/llvm/test/Transforms/InstCombine/X86/2009-03-23-i80-fp80.ll b/llvm/test/Transforms/InstCombine/X86/2009-03-23-i80-fp80.ll
index 1e2396e9528870..35533595c251ec 100644
--- a/llvm/test/Transforms/InstCombine/X86/2009-03-23-i80-fp80.ll
+++ b/llvm/test/Transforms/InstCombine/X86/2009-03-23-i80-fp80.ll
@@ -7,13 +7,13 @@ define i80 @from() {
; CHECK-LABEL: @from(
; CHECK-NEXT: ret i80 302245289961712575840256
;
- %tmp = bitcast x86_fp80 0xK4000C000000000000000 to i80
+ %tmp = bitcast x86_fp80 f0x4000C000000000000000 to i80
ret i80 %tmp
}
define x86_fp80 @to() {
; CHECK-LABEL: @to(
-; CHECK-NEXT: ret x86_fp80 0xK40018000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x40018000000000000000
;
%tmp = bitcast i80 302259125019767858003968 to x86_fp80
ret x86_fp80 %tmp
diff --git a/llvm/test/Transforms/InstCombine/and-fcmp.ll b/llvm/test/Transforms/InstCombine/and-fcmp.ll
index c7bbc8ab56f9a6..c4196e41f1862e 100644
--- a/llvm/test/Transforms/InstCombine/and-fcmp.ll
+++ b/llvm/test/Transforms/InstCombine/and-fcmp.ll
@@ -4631,12 +4631,12 @@ define i1 @intersect_fmf_4(double %a, double %b) {
define i1 @clang_builtin_isnormal_inf_check(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp uge half %fabs.x, 0xH7C00
+ %cmp = fcmp uge half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4644,12 +4644,12 @@ define i1 @clang_builtin_isnormal_inf_check(half %x) {
define <2 x i1> @clang_builtin_isnormal_inf_check_vector(<2 x half> %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_vector(
; CHECK-NEXT: [[FABS_X:%.*]] = call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq <2 x half> [[FABS_X]], splat (half 0xH7C00)
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq <2 x half> [[FABS_X]], splat (half f0x7C00)
; CHECK-NEXT: ret <2 x i1> [[AND]]
;
%fabs.x = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
%ord = fcmp ord <2 x half> %fabs.x, zeroinitializer
- %cmp = fcmp uge <2 x half> %fabs.x, <half 0xH7C00, half 0xH7C00>
+ %cmp = fcmp uge <2 x half> %fabs.x, <half f0x7C00, half f0x7C00>
%and = and <2 x i1> %ord, %cmp
ret <2 x i1> %and
}
@@ -4657,12 +4657,12 @@ define <2 x i1> @clang_builtin_isnormal_inf_check_vector(<2 x half> %x) {
define i1 @clang_builtin_isnormal_inf_check_commute(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_commute(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp uge half %fabs.x, 0xH7C00
+ %cmp = fcmp uge half %fabs.x, f0x7C00
%and = and i1 %cmp, %ord
ret i1 %and
}
@@ -4670,12 +4670,12 @@ define i1 @clang_builtin_isnormal_inf_check_commute(half %x) {
define i1 @clang_builtin_isnormal_inf_check_commute_nsz_rhs(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_commute_nsz_rhs(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp nsz ord half %fabs.x, 0.0
- %cmp = fcmp uge half %fabs.x, 0xH7C00
+ %cmp = fcmp uge half %fabs.x, f0x7C00
%and = and i1 %cmp, %ord
ret i1 %and
}
@@ -4683,23 +4683,23 @@ define i1 @clang_builtin_isnormal_inf_check_commute_nsz_rhs(half %x) {
define i1 @clang_builtin_isnormal_inf_check_commute_nsz_lhs(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_commute_nsz_lhs(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp nsz uge half %fabs.x, 0xH7C00
+ %cmp = fcmp nsz uge half %fabs.x, f0x7C00
%and = and i1 %cmp, %ord
ret i1 %and
}
define i1 @clang_builtin_isnormal_inf_check_commute_nofabs_ueq(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_commute_nofabs_ueq(
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%ord = fcmp ord half %x, 0.0
- %cmp = fcmp ueq half %x, 0xH7C00
+ %cmp = fcmp ueq half %x, f0x7C00
%and = and i1 %cmp, %ord
ret i1 %and
}
@@ -4707,12 +4707,12 @@ define i1 @clang_builtin_isnormal_inf_check_commute_nofabs_ueq(half %x) {
define i1 @clang_builtin_isnormal_inf_check_commute_nsz(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_commute_nsz(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp nsz oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp nsz oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp nsz ord half %fabs.x, 0.0
- %cmp = fcmp nsz uge half %fabs.x, 0xH7C00
+ %cmp = fcmp nsz uge half %fabs.x, f0x7C00
%and = and i1 %cmp, %ord
ret i1 %and
}
@@ -4724,7 +4724,7 @@ define i1 @clang_builtin_isnormal_inf_check_ugt(half %x) {
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp ugt half %fabs.x, 0xH7C00
+ %cmp = fcmp ugt half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4733,12 +4733,12 @@ define i1 @clang_builtin_isnormal_inf_check_ugt(half %x) {
define i1 @clang_builtin_isnormal_inf_check_ult(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_ult(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp ult half %fabs.x, 0xH7C00
+ %cmp = fcmp ult half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4746,12 +4746,12 @@ define i1 @clang_builtin_isnormal_inf_check_ult(half %x) {
; ule -> ole
define i1 @clang_builtin_isnormal_inf_check_ule(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_ule(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[ORD]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp ule half %fabs.x, 0xH7C00
+ %cmp = fcmp ule half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4760,12 +4760,12 @@ define i1 @clang_builtin_isnormal_inf_check_ule(half %x) {
define i1 @clang_builtin_isnormal_inf_check_ueq(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_ueq(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp ueq half %fabs.x, 0xH7C00
+ %cmp = fcmp ueq half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4774,12 +4774,12 @@ define i1 @clang_builtin_isnormal_inf_check_ueq(half %x) {
define i1 @clang_builtin_isnormal_inf_check_une(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_une(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp une half %fabs.x, 0xH7C00
+ %cmp = fcmp une half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4791,7 +4791,7 @@ define i1 @clang_builtin_isnormal_inf_check_uno(half %x) {
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp uno half %fabs.x, 0xH7C00
+ %cmp = fcmp uno half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4799,12 +4799,12 @@ define i1 @clang_builtin_isnormal_inf_check_uno(half %x) {
; ord -> ord
define i1 @clang_builtin_isnormal_inf_check_ord(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_ord(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp ord half %fabs.x, 0xH7C00
+ %cmp = fcmp ord half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4812,12 +4812,12 @@ define i1 @clang_builtin_isnormal_inf_check_ord(half %x) {
define i1 @clang_builtin_isnormal_inf_check_oge(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_oge(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp oge half %fabs.x, 0xH7C00
+ %cmp = fcmp oge half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4825,24 +4825,24 @@ define i1 @clang_builtin_isnormal_inf_check_oge(half %x) {
define i1 @clang_builtin_isnormal_inf_check_olt(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_olt(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp olt half %fabs.x, 0xH7C00
+ %cmp = fcmp olt half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
define i1 @clang_builtin_isnormal_inf_check_ole(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_ole(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp ole half %fabs.x, 0xH7C00
+ %cmp = fcmp ole half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4850,12 +4850,12 @@ define i1 @clang_builtin_isnormal_inf_check_ole(half %x) {
define i1 @clang_builtin_isnormal_inf_check_oeq(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_oeq(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp = fcmp oeq half %fabs.x, 0xH7C00
+ %cmp = fcmp oeq half %fabs.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4863,12 +4863,12 @@ define i1 @clang_builtin_isnormal_inf_check_oeq(half %x) {
define i1 @clang_builtin_isnormal_inf_check_unnececcary_fabs(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_unnececcary_fabs(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %x, 0.0
- %ueq = fcmp uge half %fabs.x, 0xH7C00
+ %ueq = fcmp uge half %fabs.x, f0x7C00
%and = and i1 %ord, %ueq
ret i1 %and
}
@@ -4876,36 +4876,36 @@ define i1 @clang_builtin_isnormal_inf_check_unnececcary_fabs(half %x) {
; Negative test
define i1 @clang_builtin_isnormal_inf_check_not_ord(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_not_ord(
-; CHECK-NEXT: [[AND:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[AND:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp uno half %fabs.x, 0.0
- %ueq = fcmp uge half %fabs.x, 0xH7C00
+ %ueq = fcmp uge half %fabs.x, f0x7C00
%and = and i1 %ord, %ueq
ret i1 %and
}
define i1 @clang_builtin_isnormal_inf_check_missing_fabs(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_missing_fabs(
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %x, 0.0
- %ueq = fcmp uge half %x, 0xH7C00
+ %ueq = fcmp uge half %x, f0x7C00
%and = and i1 %ord, %ueq
ret i1 %and
}
define i1 @clang_builtin_isnormal_inf_check_neg_inf(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_neg_inf(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[ORD]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %ueq = fcmp uge half %fabs.x, 0xHFC00
+ %ueq = fcmp uge half %fabs.x, f0xFC00
%and = and i1 %ord, %ueq
ret i1 %and
}
@@ -4914,14 +4914,14 @@ define i1 @clang_builtin_isnormal_inf_check_neg_inf(half %x) {
define i1 @clang_builtin_isnormal_inf_check_not_inf(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_not_inf(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
-; CHECK-NEXT: [[UEQ:%.*]] = fcmp uge half [[FABS_X]], 0xH7BFF
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
+; CHECK-NEXT: [[UEQ:%.*]] = fcmp uge half [[FABS_X]], f0x7BFF
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %ueq = fcmp uge half %fabs.x, 0xH7BFF
+ %ueq = fcmp uge half %fabs.x, f0x7BFF
%and = and i1 %ord, %ueq
ret i1 %and
}
@@ -4929,12 +4929,12 @@ define i1 @clang_builtin_isnormal_inf_check_not_inf(half %x) {
define i1 @clang_builtin_isnormal_inf_check_nsz_lhs(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_nsz_lhs(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp nsz ord half %fabs.x, 0.0
- %ueq = fcmp uge half %fabs.x, 0xH7C00
+ %ueq = fcmp uge half %fabs.x, f0x7C00
%and = and i1 %ord, %ueq
ret i1 %and
}
@@ -4942,12 +4942,12 @@ define i1 @clang_builtin_isnormal_inf_check_nsz_lhs(half %x) {
define i1 @clang_builtin_isnormal_inf_check_nsz_rhs(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_nsz_rhs(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %ueq = fcmp nsz uge half %fabs.x, 0xH7C00
+ %ueq = fcmp nsz uge half %fabs.x, f0x7C00
%and = and i1 %ord, %ueq
ret i1 %and
}
@@ -4955,24 +4955,24 @@ define i1 @clang_builtin_isnormal_inf_check_nsz_rhs(half %x) {
define i1 @clang_builtin_isnormal_inf_check_nsz(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_nsz(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp nsz oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp nsz oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp nsz ord half %fabs.x, 0.0
- %ueq = fcmp nsz uge half %fabs.x, 0xH7C00
+ %ueq = fcmp nsz uge half %fabs.x, f0x7C00
%and = and i1 %ord, %ueq
ret i1 %and
}
define i1 @clang_builtin_isnormal_inf_check_fneg(half %x) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_fneg(
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fneg.x = fneg half %x
%ord = fcmp ord half %fneg.x, 0.0
- %cmp = fcmp uge half %x, 0xH7C00
+ %cmp = fcmp uge half %x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4980,12 +4980,12 @@ define i1 @clang_builtin_isnormal_inf_check_fneg(half %x) {
define i1 @clang_builtin_isnormal_inf_check_copysign(half %x, half %y) {
; CHECK-LABEL: @clang_builtin_isnormal_inf_check_copysign(
; CHECK-NEXT: [[COPYSIGN_X:%.*]] = call half @llvm.copysign.f16(half [[X:%.*]], half [[Y:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[COPYSIGN_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[COPYSIGN_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%copysign.x = call half @llvm.copysign.f16(half %x, half %y)
%ord = fcmp ord half %x, 0.0
- %cmp = fcmp uge half %copysign.x, 0xH7C00
+ %cmp = fcmp uge half %copysign.x, f0x7C00
%and = and i1 %ord, %cmp
ret i1 %and
}
@@ -4993,12 +4993,12 @@ define i1 @clang_builtin_isnormal_inf_check_copysign(half %x, half %y) {
define i1 @isnormal_logical_select_0(half %x) {
; CHECK-LABEL: @isnormal_logical_select_0(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp.inf = fcmp uge half %fabs.x, 0xH7C00
+ %cmp.inf = fcmp uge half %fabs.x, f0x7C00
%and = select i1 %ord, i1 %cmp.inf, i1 false
ret i1 %and
}
@@ -5006,12 +5006,12 @@ define i1 @isnormal_logical_select_0(half %x) {
define i1 @isnormal_logical_select_1(half %x) {
; CHECK-LABEL: @isnormal_logical_select_1(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp.inf = fcmp uge half %fabs.x, 0xH7C00
+ %cmp.inf = fcmp uge half %fabs.x, f0x7C00
%and = select i1 %cmp.inf, i1 %ord, i1 false
ret i1 %and
}
@@ -5019,12 +5019,12 @@ define i1 @isnormal_logical_select_1(half %x) {
define i1 @isnormal_logical_select_0_fmf0(half %x) {
; CHECK-LABEL: @isnormal_logical_select_0_fmf0(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp reassoc nsz arcp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp reassoc nsz arcp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp nsz arcp reassoc ord half %fabs.x, 0.0
- %cmp.inf = fcmp nsz arcp reassoc uge half %fabs.x, 0xH7C00
+ %cmp.inf = fcmp nsz arcp reassoc uge half %fabs.x, f0x7C00
%and = select i1 %ord, i1 %cmp.inf, i1 false
ret i1 %and
}
@@ -5032,12 +5032,12 @@ define i1 @isnormal_logical_select_0_fmf0(half %x) {
define i1 @isnormal_logical_select_0_fmf1(half %x) {
; CHECK-LABEL: @isnormal_logical_select_0_fmf1(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS_X]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs.x = call half @llvm.fabs.f16(half %x)
%ord = fcmp ord half %fabs.x, 0.0
- %cmp.inf = fcmp nsz arcp reassoc uge half %fabs.x, 0xH7C00
+ %cmp.inf = fcmp nsz arcp reassoc uge half %fabs.x, f0x7C00
%and = select i1 %ord, i1 %cmp.inf, i1 false
ret i1 %and
}
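Alongside f0x7C00 (+infinity), these checks use f0x7BFF, the largest finite half, (2 - 2^-10) * 2^15 = 65504, one ULP below infinity; the not_inf tests compare against it precisely so that the infinity fold cannot fire. Sketch:

  ; Illustrative snippet: 65504.0, the largest finite binary16 value.
  define i1 @ge_largest_finite(half %x) {
    %cmp = fcmp uge half %x, f0x7BFF
    ret i1 %cmp
  }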
diff --git a/llvm/test/Transforms/InstCombine/binop-itofp.ll b/llvm/test/Transforms/InstCombine/binop-itofp.ll
index 702bbbbf7d1760..dcc10c8680452a 100644
--- a/llvm/test/Transforms/InstCombine/binop-itofp.ll
+++ b/llvm/test/Transforms/InstCombine/binop-itofp.ll
@@ -50,7 +50,7 @@ define half @test_ui_ui_i8_add_C_fail_no_repr(i8 noundef %x_in) {
; CHECK-LABEL: @test_ui_ui_i8_add_C_fail_no_repr(
; CHECK-NEXT: [[X:%.*]] = and i8 [[X_IN:%.*]], 127
; CHECK-NEXT: [[XF:%.*]] = uitofp nneg i8 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], 0xH57F8
+; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], f0x57F8
; CHECK-NEXT: ret half [[R]]
;
%x = and i8 %x_in, 127
@@ -63,7 +63,7 @@ define half @test_ui_ui_i8_add_C_fail_overflow(i8 noundef %x_in) {
; CHECK-LABEL: @test_ui_ui_i8_add_C_fail_overflow(
; CHECK-NEXT: [[X:%.*]] = and i8 [[X_IN:%.*]], 127
; CHECK-NEXT: [[XF:%.*]] = uitofp nneg i8 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], 0xH5808
+; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], f0x5808
; CHECK-NEXT: ret half [[R]]
;
%x = and i8 %x_in, 127
@@ -284,7 +284,7 @@ define half @test_ui_ui_i8_mul_C_fail_overlow(i8 noundef %x_in) {
; CHECK-LABEL: @test_ui_ui_i8_mul_C_fail_overlow(
; CHECK-NEXT: [[X:%.*]] = and i8 [[X_IN:%.*]], 14
; CHECK-NEXT: [[XF:%.*]] = uitofp nneg i8 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], 0xH4CC0
+; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], f0x4CC0
; CHECK-NEXT: ret half [[R]]
;
%x = and i8 %x_in, 14
@@ -333,7 +333,7 @@ define half @test_si_si_i8_mul_C_fail_no_repr(i8 noundef %x_in) {
; CHECK-NEXT: [[XX:%.*]] = and i8 [[X_IN:%.*]], 6
; CHECK-NEXT: [[X:%.*]] = or disjoint i8 [[XX]], 1
; CHECK-NEXT: [[XF:%.*]] = uitofp nneg i8 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], 0xHC780
+; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], f0xC780
; CHECK-NEXT: ret half [[R]]
;
%xx = and i8 %x_in, 6
@@ -348,7 +348,7 @@ define half @test_si_si_i8_mul_C_fail_overflow(i8 noundef %x_in) {
; CHECK-NEXT: [[XX:%.*]] = and i8 [[X_IN:%.*]], 6
; CHECK-NEXT: [[X:%.*]] = or disjoint i8 [[XX]], 1
; CHECK-NEXT: [[XF:%.*]] = uitofp nneg i8 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], 0xHCCC0
+; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], f0xCCC0
; CHECK-NEXT: ret half [[R]]
;
%xx = and i8 %x_in, 6
@@ -464,7 +464,7 @@ define half @test_ui_ui_i16_add_C_fail_overflow(i16 noundef %x_in) {
; CHECK-LABEL: @test_ui_ui_i16_add_C_fail_overflow(
; CHECK-NEXT: [[X:%.*]] = and i16 [[X_IN:%.*]], 2047
; CHECK-NEXT: [[XF:%.*]] = uitofp nneg i16 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], 0xH7BD0
+; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], f0x7BD0
; CHECK-NEXT: ret half [[R]]
;
%x = and i16 %x_in, 2047
@@ -512,12 +512,12 @@ define half @test_si_si_i16_add_C_overflow(i16 noundef %x_in) {
; CHECK-LABEL: @test_si_si_i16_add_C_overflow(
; CHECK-NEXT: [[X:%.*]] = or i16 [[X_IN:%.*]], -2048
; CHECK-NEXT: [[XF:%.*]] = sitofp i16 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], 0xH7840
+; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], f0x7840
; CHECK-NEXT: ret half [[R]]
;
%x = or i16 %x_in, -2048
%xf = sitofp i16 %x to half
- %r = fadd half %xf, 0xH7840
+ %r = fadd half %xf, f0x7840
ret half %r
}
@@ -661,7 +661,7 @@ define half @test_si_si_i16_mul_C_fail_overflow(i16 noundef %x_in) {
; CHECK-LABEL: @test_si_si_i16_mul_C_fail_overflow(
; CHECK-NEXT: [[X:%.*]] = or i16 [[X_IN:%.*]], -129
; CHECK-NEXT: [[XF:%.*]] = sitofp i16 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], 0xH5800
+; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], f0x5800
; CHECK-NEXT: ret half [[R]]
;
%x = or i16 %x_in, -129
@@ -674,7 +674,7 @@ define half @test_si_si_i16_mul_C_fail_no_promotion(i16 noundef %x_in) {
; CHECK-LABEL: @test_si_si_i16_mul_C_fail_no_promotion(
; CHECK-NEXT: [[X:%.*]] = or i16 [[X_IN:%.*]], -4097
; CHECK-NEXT: [[XF:%.*]] = sitofp i16 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], 0xH4500
+; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], f0x4500
; CHECK-NEXT: ret half [[R]]
;
%x = or i16 %x_in, -4097
@@ -774,7 +774,7 @@ define half @test_si_si_i12_add_C_fail_overflow(i12 noundef %x_in) {
; CHECK-LABEL: @test_si_si_i12_add_C_fail_overflow(
; CHECK-NEXT: [[X:%.*]] = or i12 [[X_IN:%.*]], -2048
; CHECK-NEXT: [[XF:%.*]] = sitofp i12 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], 0xHBC00
+; CHECK-NEXT: [[R:%.*]] = fadd half [[XF]], f0xBC00
; CHECK-NEXT: ret half [[R]]
;
%x = or i12 %x_in, -2048
@@ -963,7 +963,7 @@ define half @test_si_si_i12_mul_C_fail_overflow(i12 noundef %x_in) {
; CHECK-LABEL: @test_si_si_i12_mul_C_fail_overflow(
; CHECK-NEXT: [[X:%.*]] = or i12 [[X_IN:%.*]], -64
; CHECK-NEXT: [[XF:%.*]] = sitofp i12 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], 0xHD400
+; CHECK-NEXT: [[R:%.*]] = fmul half [[XF]], f0xD400
; CHECK-NEXT: ret half [[R]]
;
%x = or i12 %x_in, -64
diff --git a/llvm/test/Transforms/InstCombine/binop-select.ll b/llvm/test/Transforms/InstCombine/binop-select.ll
index 25f624ee134126..5bd3cf7263a7c2 100644
--- a/llvm/test/Transforms/InstCombine/binop-select.ll
+++ b/llvm/test/Transforms/InstCombine/binop-select.ll
@@ -361,21 +361,21 @@ define <2 x half> @fmul_sel_op1(i1 %b, <2 x half> %p) {
; CHECK-NEXT: ret <2 x half> zeroinitializer
;
%x = fadd <2 x half> %p, <half 1.0, half 2.0> ; thwart complexity-based canonicalization
- %s = select i1 %b, <2 x half> zeroinitializer, <2 x half> <half 0xHffff, half 0xHffff>
+ %s = select i1 %b, <2 x half> zeroinitializer, <2 x half> <half f0xffff, half f0xffff>
%r = fmul nnan nsz <2 x half> %x, %s
ret <2 x half> %r
}
define <2 x half> @fmul_sel_op1_use(i1 %b, <2 x half> %p) {
; CHECK-LABEL: @fmul_sel_op1_use(
-; CHECK-NEXT: [[X:%.*]] = fadd <2 x half> [[P:%.*]], <half 0xH3C00, half 0xH4000>
-; CHECK-NEXT: [[S:%.*]] = select i1 [[B:%.*]], <2 x half> zeroinitializer, <2 x half> splat (half 0xHFFFF)
+; CHECK-NEXT: [[X:%.*]] = fadd <2 x half> [[P:%.*]], <half f0x3C00, half f0x4000>
+; CHECK-NEXT: [[S:%.*]] = select i1 [[B:%.*]], <2 x half> zeroinitializer, <2 x half> splat (half f0xFFFF)
; CHECK-NEXT: call void @use_v2f16(<2 x half> [[S]])
; CHECK-NEXT: [[R:%.*]] = fmul nnan nsz <2 x half> [[X]], [[S]]
; CHECK-NEXT: ret <2 x half> [[R]]
;
%x = fadd <2 x half> %p, <half 1.0, half 2.0> ; thwart complexity-based canonicalization
- %s = select i1 %b, <2 x half> zeroinitializer, <2 x half> <half 0xHffff, half 0xHffff>
+ %s = select i1 %b, <2 x half> zeroinitializer, <2 x half> <half f0xffff, half f0xffff>
call void @use_v2f16(<2 x half> %s)
%r = fmul nnan nsz <2 x half> %x, %s
ret <2 x half> %r
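As with the legacy forms, hex digits in f0x literals are case-insensitive: the input IR here writes f0xffff while the regenerated CHECK lines print the canonical upper-case f0xFFFF, a half NaN with the sign bit and every fraction bit set. Sketch:

  ; Illustrative snippet: lower- and upper-case digits denote the
  ; same 16-bit pattern (a negative, all-ones-payload NaN).
  define half @nan_all_ones() {
    ret half f0xffff
  }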
diff --git a/llvm/test/Transforms/InstCombine/bitcast-inseltpoison.ll b/llvm/test/Transforms/InstCombine/bitcast-inseltpoison.ll
index 49e77009f3b1a0..9ce960d8879dcd 100644
--- a/llvm/test/Transforms/InstCombine/bitcast-inseltpoison.ll
+++ b/llvm/test/Transforms/InstCombine/bitcast-inseltpoison.ll
@@ -541,8 +541,8 @@ define void @constant_fold_vector_to_float() {
define void @constant_fold_vector_to_half() {
; CHECK-LABEL: @constant_fold_vector_to_half(
-; CHECK-NEXT: store volatile half 0xH4000, ptr undef, align 2
-; CHECK-NEXT: store volatile half 0xH4000, ptr undef, align 2
+; CHECK-NEXT: store volatile half f0x4000, ptr undef, align 2
+; CHECK-NEXT: store volatile half f0x4000, ptr undef, align 2
; CHECK-NEXT: ret void
;
store volatile half bitcast (<2 x i8> <i8 0, i8 64> to half), ptr undef
diff --git a/llvm/test/Transforms/InstCombine/bitcast-store.ll b/llvm/test/Transforms/InstCombine/bitcast-store.ll
index 3d4bd251e98a57..a387a60407d0ba 100644
--- a/llvm/test/Transforms/InstCombine/bitcast-store.ll
+++ b/llvm/test/Transforms/InstCombine/bitcast-store.ll
@@ -59,7 +59,7 @@ define void @ppcf128_ones_store(ptr %dest) {
; CHECK-LABEL: define void @ppcf128_ones_store
; CHECK-SAME: (ptr [[DEST:%.*]]) {
; CHECK-NEXT: entry:
-; CHECK-NEXT: store ppc_fp128 0xMFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF, ptr [[DEST]], align 16
+; CHECK-NEXT: store ppc_fp128 f0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF, ptr [[DEST]], align 16
; CHECK-NEXT: ret void
;
entry:
diff --git a/llvm/test/Transforms/InstCombine/bitcast.ll b/llvm/test/Transforms/InstCombine/bitcast.ll
index 37d41de3e99911..4cb7e2654ab4f5 100644
--- a/llvm/test/Transforms/InstCombine/bitcast.ll
+++ b/llvm/test/Transforms/InstCombine/bitcast.ll
@@ -668,8 +668,8 @@ define void @constant_fold_vector_to_float() {
define void @constant_fold_vector_to_half() {
; CHECK-LABEL: @constant_fold_vector_to_half(
-; CHECK-NEXT: store volatile half 0xH4000, ptr undef, align 2
-; CHECK-NEXT: store volatile half 0xH4000, ptr undef, align 2
+; CHECK-NEXT: store volatile half f0x4000, ptr undef, align 2
+; CHECK-NEXT: store volatile half f0x4000, ptr undef, align 2
; CHECK-NEXT: ret void
;
store volatile half bitcast (<2 x i8> <i8 0, i8 64> to half), ptr undef
diff --git a/llvm/test/Transforms/InstCombine/cabs-discrete.ll b/llvm/test/Transforms/InstCombine/cabs-discrete.ll
index 2e28ec9eb2ef87..fe1315899f871a 100644
--- a/llvm/test/Transforms/InstCombine/cabs-discrete.ll
+++ b/llvm/test/Transforms/InstCombine/cabs-discrete.ll
@@ -123,7 +123,7 @@ define fp128 @cabsl_zero_real(fp128 %imag) {
; CHECK-NEXT: [[CABS:%.*]] = tail call fp128 @llvm.fabs.f128(fp128 [[IMAG:%.*]])
; CHECK-NEXT: ret fp128 [[CABS]]
;
- %call = tail call fp128 @cabsl(fp128 0xL00000000000000000000000000000000, fp128 %imag)
+ %call = tail call fp128 @cabsl(fp128 f0x00000000000000000000000000000000, fp128 %imag)
ret fp128 %call
}
@@ -132,7 +132,7 @@ define fp128 @cabsl_zero_imag(fp128 %real) {
; CHECK-NEXT: [[CABS:%.*]] = tail call fp128 @llvm.fabs.f128(fp128 [[REAL:%.*]])
; CHECK-NEXT: ret fp128 [[CABS]]
;
- %call = tail call fp128 @cabsl(fp128 %real, fp128 0xL00000000000000000000000000000000)
+ %call = tail call fp128 @cabsl(fp128 %real, fp128 f0x00000000000000000000000000000000)
ret fp128 %call
}
@@ -141,7 +141,7 @@ define fp128 @fast_cabsl_neg_zero_imag(fp128 %real) {
; CHECK-NEXT: [[CABS:%.*]] = tail call fast fp128 @llvm.fabs.f128(fp128 [[REAL:%.*]])
; CHECK-NEXT: ret fp128 [[CABS]]
;
- %call = tail call fast fp128 @cabsl(fp128 %real, fp128 0xL00000000000000008000000000000000)
+ %call = tail call fast fp128 @cabsl(fp128 %real, fp128 f0x80000000000000000000000000000000)
ret fp128 %call
}
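
; Aside (a minimal sketch, not part of the patch): the digit reordering in the
; fp128 hunks above is deliberate. The legacy 0xL spelling lists the low-order
; 64 bits first, while the new f0x spelling is one 128-bit pattern written
; most-significant digit first. Both constants below denote fp128 -0.0; the
; function itself is a made-up example.
define fp128 @negzero_spellings(fp128 %x) {
  %legacy = fsub fp128 0xL00000000000000008000000000000000, %x ; low half, then high half
  %new    = fsub fp128 f0x80000000000000000000000000000000, %x ; whole pattern, high-to-low
  %same   = fadd fp128 %legacy, %new
  ret fp128 %same
}
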
diff --git a/llvm/test/Transforms/InstCombine/canonicalize-const-to-bop.ll b/llvm/test/Transforms/InstCombine/canonicalize-const-to-bop.ll
index 68049ca230191e..142463f7bd5cf1 100644
--- a/llvm/test/Transforms/InstCombine/canonicalize-const-to-bop.ll
+++ b/llvm/test/Transforms/InstCombine/canonicalize-const-to-bop.ll
@@ -294,9 +294,9 @@ define i8 @multi_use_bop_negative(i8 %x) {
define half @float_negative(half %x) {
; CHECK-LABEL: define half @float_negative(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[ADD:%.*]] = fmul fast half [[X]], 0xH2E66
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ugt half [[X]], 0xH5640
-; CHECK-NEXT: [[S:%.*]] = select i1 [[CMP]], half 0xH4900, half [[ADD]]
+; CHECK-NEXT: [[ADD:%.*]] = fmul fast half [[X]], f0x2E66
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ugt half [[X]], f0x5640
+; CHECK-NEXT: [[S:%.*]] = select i1 [[CMP]], half f0x4900, half [[ADD]]
; CHECK-NEXT: ret half [[S]]
;
%add = fdiv fast half %x, 10.0
diff --git a/llvm/test/Transforms/InstCombine/canonicalize-fcmp-inf.ll b/llvm/test/Transforms/InstCombine/canonicalize-fcmp-inf.ll
index a85d7932f9b7ef..15cb5d10c7f2b1 100644
--- a/llvm/test/Transforms/InstCombine/canonicalize-fcmp-inf.ll
+++ b/llvm/test/Transforms/InstCombine/canonicalize-fcmp-inf.ll
@@ -4,20 +4,20 @@
define i1 @olt_pinf(half %x) {
; CHECK-LABEL: define i1 @olt_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp one half [[X]], 0xH7C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp one half [[X]], f0x7C00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp olt half %x, 0xH7c00
+ %cmp = fcmp olt half %x, f0x7c00
ret i1 %cmp
}
define i1 @ole_pinf(half %x) {
; CHECK-LABEL: define i1 @ole_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ole half %x, 0xH7c00
+ %cmp = fcmp ole half %x, f0x7c00
ret i1 %cmp
}
@@ -26,27 +26,27 @@ define i1 @ogt_pinf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 false
;
- %cmp = fcmp ogt half %x, 0xH7c00
+ %cmp = fcmp ogt half %x, f0x7c00
ret i1 %cmp
}
define i1 @oge_pinf(half %x) {
; CHECK-LABEL: define i1 @oge_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X]], 0xH7C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X]], f0x7C00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp oge half %x, 0xH7c00
+ %cmp = fcmp oge half %x, f0x7c00
ret i1 %cmp
}
define i1 @ult_pinf(half %x) {
; CHECK-LABEL: define i1 @ult_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X]], 0xH7C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X]], f0x7C00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ult half %x, 0xH7c00
+ %cmp = fcmp ult half %x, f0x7c00
ret i1 %cmp
}
@@ -55,27 +55,27 @@ define i1 @ule_pinf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 true
;
- %cmp = fcmp ule half %x, 0xH7c00
+ %cmp = fcmp ule half %x, f0x7c00
ret i1 %cmp
}
define i1 @ugt_pinf(half %x) {
; CHECK-LABEL: define i1 @ugt_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ugt half %x, 0xH7c00
+ %cmp = fcmp ugt half %x, f0x7c00
ret i1 %cmp
}
define i1 @uge_pinf(half %x) {
; CHECK-LABEL: define i1 @uge_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ueq half [[X]], 0xH7C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ueq half [[X]], f0x7C00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp uge half %x, 0xH7c00
+ %cmp = fcmp uge half %x, f0x7c00
ret i1 %cmp
}
@@ -84,67 +84,67 @@ define i1 @olt_ninf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 false
;
- %cmp = fcmp olt half %x, 0xHfc00
+ %cmp = fcmp olt half %x, f0xfc00
ret i1 %cmp
}
define i1 @ole_ninf(half %x) {
; CHECK-LABEL: define i1 @ole_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X]], 0xHFC00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X]], f0xFC00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ole half %x, 0xHfc00
+ %cmp = fcmp ole half %x, f0xfc00
ret i1 %cmp
}
define i1 @ogt_ninf(half %x) {
; CHECK-LABEL: define i1 @ogt_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp one half [[X]], 0xHFC00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp one half [[X]], f0xFC00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ogt half %x, 0xHfc00
+ %cmp = fcmp ogt half %x, f0xfc00
ret i1 %cmp
}
define i1 @oge_ninf(half %x) {
; CHECK-LABEL: define i1 @oge_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp oge half %x, 0xHfc00
+ %cmp = fcmp oge half %x, f0xfc00
ret i1 %cmp
}
define i1 @ult_ninf(half %x) {
; CHECK-LABEL: define i1 @ult_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ult half %x, 0xHfc00
+ %cmp = fcmp ult half %x, f0xfc00
ret i1 %cmp
}
define i1 @ule_ninf(half %x) {
; CHECK-LABEL: define i1 @ule_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ueq half [[X]], 0xHFC00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ueq half [[X]], f0xFC00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ule half %x, 0xHfc00
+ %cmp = fcmp ule half %x, f0xfc00
ret i1 %cmp
}
define i1 @ugt_ninf(half %x) {
; CHECK-LABEL: define i1 @ugt_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X]], 0xHFC00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X]], f0xFC00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ugt half %x, 0xHfc00
+ %cmp = fcmp ugt half %x, f0xfc00
ret i1 %cmp
}
@@ -153,37 +153,37 @@ define i1 @uge_ninf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 true
;
- %cmp = fcmp uge half %x, 0xHfc00
+ %cmp = fcmp uge half %x, f0xfc00
ret i1 %cmp
}
define i1 @olt_pinf_fmf(half %x) {
; CHECK-LABEL: define i1 @olt_pinf_fmf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp nsz one half [[X]], 0xH7C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp nsz one half [[X]], f0x7C00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp nsz olt half %x, 0xH7c00
+ %cmp = fcmp nsz olt half %x, f0x7c00
ret i1 %cmp
}
define i1 @oge_pinf_fmf(half %x) {
; CHECK-LABEL: define i1 @oge_pinf_fmf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp nnan oeq half [[X]], 0xH7C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp nnan oeq half [[X]], f0x7C00
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp nnan oge half %x, 0xH7c00
+ %cmp = fcmp nnan oge half %x, f0x7c00
ret i1 %cmp
}
define <2 x i1> @olt_pinf_vec(<2 x half> %x) {
; CHECK-LABEL: define <2 x i1> @olt_pinf_vec(
; CHECK-SAME: <2 x half> [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp one <2 x half> [[X]], splat (half 0xH7C00)
+; CHECK-NEXT: [[CMP:%.*]] = fcmp one <2 x half> [[X]], splat (half f0x7C00)
; CHECK-NEXT: ret <2 x i1> [[CMP]]
;
- %cmp = fcmp olt <2 x half> %x, <half 0xH7c00, half 0xH7c00>
+ %cmp = fcmp olt <2 x half> %x, <half f0x7c00, half f0x7c00>
ret <2 x i1> %cmp
}
@@ -193,7 +193,7 @@ define <2 x i1> @oge_ninf_vec(<2 x half> %x) {
; CHECK-NEXT: [[CMP:%.*]] = fcmp ord <2 x half> [[X]], zeroinitializer
; CHECK-NEXT: ret <2 x i1> [[CMP]]
;
- %cmp = fcmp oge <2 x half> %x, <half 0xHfc00, half 0xHfc00>
+ %cmp = fcmp oge <2 x half> %x, <half f0xfc00, half f0xfc00>
ret <2 x i1> %cmp
}
@@ -202,20 +202,20 @@ define <2 x i1> @oge_ninf_vec(<2 x half> %x) {
define i1 @ord_pinf(half %x) {
; CHECK-LABEL: define i1 @ord_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ord half %x, 0xH7c00
+ %cmp = fcmp ord half %x, f0x7c00
ret i1 %cmp
}
define i1 @uno_pinf(half %x) {
; CHECK-LABEL: define i1 @uno_pinf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp uno half %x, 0xH7c00
+ %cmp = fcmp uno half %x, f0x7c00
ret i1 %cmp
}
@@ -224,7 +224,7 @@ define i1 @true_pinf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 true
;
- %cmp = fcmp true half %x, 0xH7c00
+ %cmp = fcmp true half %x, f0x7c00
ret i1 %cmp
}
@@ -233,27 +233,27 @@ define i1 @false_pinf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 false
;
- %cmp = fcmp false half %x, 0xH7c00
+ %cmp = fcmp false half %x, f0x7c00
ret i1 %cmp
}
define i1 @ord_ninf(half %x) {
; CHECK-LABEL: define i1 @ord_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp ord half %x, 0xHfc00
+ %cmp = fcmp ord half %x, f0xfc00
ret i1 %cmp
}
define i1 @uno_ninf(half %x) {
; CHECK-LABEL: define i1 @uno_ninf(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp uno half [[X]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
- %cmp = fcmp uno half %x, 0xHfc00
+ %cmp = fcmp uno half %x, f0xfc00
ret i1 %cmp
}
@@ -262,7 +262,7 @@ define i1 @true_ninf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 true
;
- %cmp = fcmp true half %x, 0xHfc00
+ %cmp = fcmp true half %x, f0xfc00
ret i1 %cmp
}
@@ -271,14 +271,14 @@ define i1 @false_ninf(half %x) {
; CHECK-SAME: half [[X:%.*]]) {
; CHECK-NEXT: ret i1 false
;
- %cmp = fcmp false half %x, 0xHfc00
+ %cmp = fcmp false half %x, f0xfc00
ret i1 %cmp
}
define i1 @olt_one(half %x) {
; CHECK-LABEL: define i1 @olt_one(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[CMP:%.*]] = fcmp olt half [[X]], 0xH3C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp olt half [[X]], f0x3C00
; CHECK-NEXT: ret i1 [[CMP]]
;
%cmp = fcmp olt half %x, 1.0
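
; Aside (a minimal sketch, not part of the patch): the half bit patterns above
; decode as f0x7C00 = +infinity, f0xFC00 = -infinity, f0x0000 = +0.0, and
; f0x3C00 = 1.0. The fold checked in @olt_pinf works because x < +inf holds
; exactly when x is ordered and not +inf; the function below is a made-up
; example of that equivalence.
define i1 @olt_pinf_equivalence(half %x) {
  %before = fcmp olt half %x, f0x7C00  ; x < +inf
  %after  = fcmp one half %x, f0x7C00  ; ordered and x != +inf
  %same   = icmp eq i1 %before, %after ; always true
  ret i1 %same
}
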
diff --git a/llvm/test/Transforms/InstCombine/cast-int-fcmp-eq-0.ll b/llvm/test/Transforms/InstCombine/cast-int-fcmp-eq-0.ll
index dfefa11b70a020..c4691dad79037e 100644
--- a/llvm/test/Transforms/InstCombine/cast-int-fcmp-eq-0.ll
+++ b/llvm/test/Transforms/InstCombine/cast-int-fcmp-eq-0.ll
@@ -274,11 +274,11 @@ define i1 @i64_cast_cmp_oeq_int_0_sitofp_half(i64 %i) {
define i1 @i32_cast_cmp_oeq_int_0_uitofp_ppcf128(i32 %i) {
; CHECK-LABEL: @i32_cast_cmp_oeq_int_0_uitofp_ppcf128(
; CHECK-NEXT: [[F:%.*]] = uitofp i32 [[I:%.*]] to ppc_fp128
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq ppc_fp128 [[F]], 0xM00000000000000000000000000000000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq ppc_fp128 [[F]], f0x00000000000000000000000000000000
; CHECK-NEXT: ret i1 [[CMP]]
;
%f = uitofp i32 %i to ppc_fp128
- %cmp = fcmp oeq ppc_fp128 %f, 0xM00000000000000000000000000000000
+ %cmp = fcmp oeq ppc_fp128 %f, f0x00000000000000000000000000000000
ret i1 %cmp
}
diff --git a/llvm/test/Transforms/InstCombine/cast.ll b/llvm/test/Transforms/InstCombine/cast.ll
index 0f957e22ad17bc..2419a37857c112 100644
--- a/llvm/test/Transforms/InstCombine/cast.ll
+++ b/llvm/test/Transforms/InstCombine/cast.ll
@@ -1442,7 +1442,7 @@ define <2 x i32> @test90() {
; LE-LABEL: @test90(
; LE-NEXT: ret <2 x i32> <i32 0, i32 1006632960>
;
- %t6 = bitcast <4 x half> <half poison, half poison, half poison, half 0xH3C00> to <2 x i32>
+ %t6 = bitcast <4 x half> <half poison, half poison, half poison, half f0x3C00> to <2 x i32>
ret <2 x i32> %t6
}
diff --git a/llvm/test/Transforms/InstCombine/combine-is.fpclass-and-fcmp.ll b/llvm/test/Transforms/InstCombine/combine-is.fpclass-and-fcmp.ll
index dcd79f58390023..9ba02de84fdefe 100644
--- a/llvm/test/Transforms/InstCombine/combine-is.fpclass-and-fcmp.ll
+++ b/llvm/test/Transforms/InstCombine/combine-is.fpclass-and-fcmp.ll
@@ -6,7 +6,7 @@ define i1 @fcmp_oeq_inf_or_class_normal(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 776)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.inf = fcmp oeq half %x, 0xH7C00
+ %oeq.inf = fcmp oeq half %x, f0x7C00
%class = call i1 @llvm.is.fpclass.f16(half %x, i32 264)
%or = or i1 %oeq.inf, %class
ret i1 %or
@@ -17,7 +17,7 @@ define i1 @class_normal_or_fcmp_oeq_inf(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 776)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.inf = fcmp oeq half %x, 0xH7C00
+ %oeq.inf = fcmp oeq half %x, f0x7C00
%class = call i1 @llvm.is.fpclass.f16(half %x, i32 264)
%or = or i1 %class, %oeq.inf
ret i1 %or
@@ -28,7 +28,7 @@ define <2 x i1> @fcmp_oeq_inf_or_class_normal_vector(<2 x half> %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call <2 x i1> @llvm.is.fpclass.v2f16(<2 x half> [[X:%.*]], i32 776)
; CHECK-NEXT: ret <2 x i1> [[CLASS]]
;
- %oeq.inf = fcmp oeq <2 x half> %x, <half 0xH7C00, half 0xH7C00>
+ %oeq.inf = fcmp oeq <2 x half> %x, <half f0x7C00, half f0x7C00>
%class = call <2 x i1> @llvm.is.fpclass.v2f16(<2 x half> %x, i32 264)
%or = or <2 x i1> %oeq.inf, %class
ret <2 x i1> %or
@@ -36,13 +36,13 @@ define <2 x i1> @fcmp_oeq_inf_or_class_normal_vector(<2 x half> %x) {
define i1 @fcmp_oeq_inf_multi_use_or_class_normal(half %x, ptr %ptr) {
; CHECK-LABEL: @fcmp_oeq_inf_multi_use_or_class_normal(
-; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: store i1 [[OEQ_INF]], ptr [[PTR:%.*]], align 1
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[OEQ_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
;
- %oeq.inf = fcmp oeq half %x, 0xH7C00
+ %oeq.inf = fcmp oeq half %x, f0x7C00
store i1 %oeq.inf, ptr %ptr
%class = call i1 @llvm.is.fpclass.f16(half %x, i32 264)
%or = or i1 %oeq.inf, %class
@@ -51,13 +51,13 @@ define i1 @fcmp_oeq_inf_multi_use_or_class_normal(half %x, ptr %ptr) {
define i1 @fcmp_oeq_inf_or_class_normal_multi_use(half %x, ptr %ptr) {
; CHECK-LABEL: @fcmp_oeq_inf_or_class_normal_multi_use(
-; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: store i1 [[CLASS]], ptr [[PTR:%.*]], align 1
; CHECK-NEXT: [[OR:%.*]] = or i1 [[OEQ_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
;
- %oeq.inf = fcmp oeq half %x, 0xH7C00
+ %oeq.inf = fcmp oeq half %x, f0x7C00
%class = call i1 @llvm.is.fpclass.f16(half %x, i32 264)
store i1 %class, ptr %ptr
%or = or i1 %oeq.inf, %class
@@ -77,8 +77,8 @@ define i1 @fcmp_ord_or_class_isnan(half %x) {
define i1 @fcmp_ord_or_class_isnan_wrong_operand(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_or_class_isnan_wrong_operand(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[Y:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[Y:%.*]], f0x0000
; CHECK-NEXT: [[OR:%.*]] = or i1 [[ORD]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
;
@@ -127,7 +127,7 @@ define i1 @fcmp_isfinite_and_class_subnormal(half %x) {
; CHECK-NEXT: ret i1 [[SUBNORMAL_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp olt half %fabs, 0xH7C00
+ %is.finite = fcmp olt half %fabs, f0x7C00
%subnormal.class = call i1 @llvm.is.fpclass.f16(half %x, i32 144)
%and = and i1 %is.finite, %subnormal.class
ret i1 %and
@@ -136,11 +136,11 @@ define i1 @fcmp_isfinite_and_class_subnormal(half %x) {
define i1 @fcmp_isfinite_or_class_subnormal(half %x) {
; CHECK-LABEL: @fcmp_isfinite_or_class_subnormal(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[SUBNORMAL_CLASS:%.*]] = fcmp one half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[SUBNORMAL_CLASS:%.*]] = fcmp one half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[SUBNORMAL_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp olt half %fabs, 0xH7C00
+ %is.finite = fcmp olt half %fabs, f0x7C00
%subnormal.class = call i1 @llvm.is.fpclass.f16(half %x, i32 144)
%or = or i1 %is.finite, %subnormal.class
ret i1 %or
@@ -150,11 +150,11 @@ define i1 @fcmp_isfinite_or_class_subnormal(half %x) {
define i1 @fcmp_issubnormal_or_class_finite(half %x) {
; CHECK-LABEL: @fcmp_issubnormal_or_class_finite(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp one half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp one half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%is.finite.class = call i1 @llvm.is.fpclass.f16(half %x, i32 504)
%or = or i1 %is.subnormal, %is.finite.class
ret i1 %or
@@ -164,11 +164,11 @@ define i1 @fcmp_issubnormal_or_class_finite(half %x) {
define i1 @class_finite_or_fcmp_issubnormal(half %x) {
; CHECK-LABEL: @class_finite_or_fcmp_issubnormal(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp one half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp one half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%is.finite.class = call i1 @llvm.is.fpclass.f16(half %x, i32 504)
%or = or i1 %is.finite.class, %is.subnormal
ret i1 %or
@@ -181,7 +181,7 @@ define i1 @fcmp_issubnormal_and_class_finite(half %x) {
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%is.finite.class = call i1 @llvm.is.fpclass.f16(half %x, i32 504)
%and = and i1 %is.subnormal, %is.finite.class
ret i1 %and
@@ -193,7 +193,7 @@ define i1 @class_inf_or_fcmp_issubnormal(half %x) {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%is.inf.class = call i1 @llvm.is.fpclass.f16(half %x, i32 516)
%or = or i1 %is.inf.class, %is.subnormal
ret i1 %or
@@ -203,11 +203,11 @@ define i1 @class_inf_or_fcmp_issubnormal(half %x) {
define <2 x i1> @class_finite_or_fcmp_issubnormal_vector(<2 x half> %x) {
; CHECK-LABEL: @class_finite_or_fcmp_issubnormal_vector(
; CHECK-NEXT: [[TMP1:%.*]] = call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp one <2 x half> [[TMP1]], splat (half 0xH7C00)
+; CHECK-NEXT: [[OR:%.*]] = fcmp one <2 x half> [[TMP1]], splat (half f0x7C00)
; CHECK-NEXT: ret <2 x i1> [[OR]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %is.subnormal = fcmp olt <2 x half> %fabs, <half 0xH0400, half 0xH0400>
+ %is.subnormal = fcmp olt <2 x half> %fabs, <half f0x0400, half f0x0400>
%is.finite.class = call <2 x i1> @llvm.is.fpclass.v2f16(<2 x half> %x, i32 504)
%or = or <2 x i1> %is.finite.class, %is.subnormal
ret <2 x i1> %or
@@ -226,7 +226,7 @@ define i1 @fcmp_oeq_zero_or_class_normal(half %x) {
define i1 @fcmp_oeq_zero_or_class_normal_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_zero_or_class_normal_daz(
-; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[OEQ_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
@@ -252,7 +252,7 @@ define <2 x i1> @fcmp_oeq_zero_or_class_normal_daz_v2f16(<2 x half> %x) #1 {
define i1 @fcmp_oeq_zero_or_class_normal_dynamic(half %x) #2 {
; CHECK-LABEL: @fcmp_oeq_zero_or_class_normal_dynamic(
-; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OEQ_INF:%.*]] = fcmp oeq half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[OEQ_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
@@ -311,7 +311,7 @@ define i1 @class_normal_or_fcmp_ueq_zero(half %x) {
define i1 @fcmp_one_zero_or_class_normal(half %x) {
; CHECK-LABEL: @fcmp_one_zero_or_class_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%one.inf = fcmp one half %x, 0.0
@@ -322,7 +322,7 @@ define i1 @fcmp_one_zero_or_class_normal(half %x) {
define i1 @fcmp_one_zero_or_class_normal_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_one_zero_or_class_normal_daz(
-; CHECK-NEXT: [[ONE_INF:%.*]] = fcmp one half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ONE_INF:%.*]] = fcmp one half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[ONE_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
@@ -335,7 +335,7 @@ define i1 @fcmp_one_zero_or_class_normal_daz(half %x) #1 {
define i1 @fcmp_one_zero_or_class_normal_dynamic(half %x) #2 {
; CHECK-LABEL: @fcmp_one_zero_or_class_normal_dynamic(
-; CHECK-NEXT: [[ONE_INF:%.*]] = fcmp one half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ONE_INF:%.*]] = fcmp one half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[ONE_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
@@ -348,7 +348,7 @@ define i1 @fcmp_one_zero_or_class_normal_dynamic(half %x) #2 {
define i1 @class_normal_or_fcmp_one_zero(half %x) {
; CHECK-LABEL: @class_normal_or_fcmp_one_zero(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%one.inf = fcmp one half %x, 0.0
@@ -359,7 +359,7 @@ define i1 @class_normal_or_fcmp_one_zero(half %x) {
define i1 @fcmp_une_zero_or_class_normal(half %x) {
; CHECK-LABEL: @fcmp_une_zero_or_class_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%une.inf = fcmp une half %x, 0.0
@@ -370,7 +370,7 @@ define i1 @fcmp_une_zero_or_class_normal(half %x) {
define i1 @class_normal_or_fcmp_une_zero(half %x) {
; CHECK-LABEL: @class_normal_or_fcmp_une_zero(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%une.inf = fcmp une half %x, 0.0
@@ -381,7 +381,7 @@ define i1 @class_normal_or_fcmp_une_zero(half %x) {
define i1 @class_normal_or_fcmp_une_zero_daz(half %x) #1 {
; CHECK-LABEL: @class_normal_or_fcmp_une_zero_daz(
-; CHECK-NEXT: [[UNE_INF:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[UNE_INF:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[UNE_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
@@ -394,7 +394,7 @@ define i1 @class_normal_or_fcmp_une_zero_daz(half %x) #1 {
define i1 @class_normal_or_fcmp_une_zero_dynamic(half %x) #2 {
; CHECK-LABEL: @class_normal_or_fcmp_une_zero_dynamic(
-; CHECK-NEXT: [[UNE_INF:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[UNE_INF:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X]], i32 264)
; CHECK-NEXT: [[OR:%.*]] = or i1 [[UNE_INF]], [[CLASS]]
; CHECK-NEXT: ret i1 [[OR]]
@@ -410,7 +410,7 @@ define i1 @fcmp_oeq_inf_xor_class_normal(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 776)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.inf = fcmp oeq half %x, 0xH7C00
+ %oeq.inf = fcmp oeq half %x, f0x7C00
%class = call i1 @llvm.is.fpclass.f16(half %x, i32 264)
%xor = xor i1 %oeq.inf, %class
ret i1 %xor
@@ -421,7 +421,7 @@ define i1 @class_normal_xor_fcmp_oeq_inf(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 776)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.inf = fcmp oeq half %x, 0xH7C00
+ %oeq.inf = fcmp oeq half %x, f0x7C00
%class = call i1 @llvm.is.fpclass.f16(half %x, i32 264)
%xor = xor i1 %class, %oeq.inf
ret i1 %xor
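
; Aside (a minimal sketch, not part of the patch): the class masks above are
; bitmasks over the ten llvm.is.fpclass categories, where 8 tests negative
; normal, 256 positive normal, and 512 positive infinity. So 264 (= 8 + 256)
; is "normal", and OR-ing it with an fcmp oeq against f0x7C00 (+infinity)
; merges into a single test of 776 (= 264 + 512), as the CHECK lines show.
; The function below is a made-up example spelling out that merged mask.
define i1 @normal_or_pinf(half %x) {
  %class = call i1 @llvm.is.fpclass.f16(half %x, i32 776) ; 8 + 256 + 512
  ret i1 %class
}
declare i1 @llvm.is.fpclass.f16(half, i32)
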
diff --git a/llvm/test/Transforms/InstCombine/copysign-fneg-fabs.ll b/llvm/test/Transforms/InstCombine/copysign-fneg-fabs.ll
index ce3355b6df039e..35c5c46ae55edb 100644
--- a/llvm/test/Transforms/InstCombine/copysign-fneg-fabs.ll
+++ b/llvm/test/Transforms/InstCombine/copysign-fneg-fabs.ll
@@ -278,66 +278,66 @@ define half @fneg_fabs_copysign_multi_use_fabs(half %x, half %y, ptr %ptr) {
define half @copysign_pos(half %a) {
; CHECK-LABEL: @copysign_pos(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half 0xH3C00, half [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half f0x3C00, half [[A:%.*]])
; CHECK-NEXT: ret half [[RET]]
;
entry:
- %ret = call half @llvm.copysign.f16(half 0xH3C00, half %a)
+ %ret = call half @llvm.copysign.f16(half f0x3C00, half %a)
ret half %ret
}
define half @copysign_neg(half %a) {
; CHECK-LABEL: @copysign_neg(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half 0xH3C00, half [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half f0x3C00, half [[A:%.*]])
; CHECK-NEXT: ret half [[RET]]
;
entry:
- %ret = call half @llvm.copysign.f16(half 0xHBC00, half %a)
+ %ret = call half @llvm.copysign.f16(half f0xBC00, half %a)
ret half %ret
}
define half @copysign_negzero(half %a) {
; CHECK-LABEL: @copysign_negzero(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half 0xH0000, half [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half f0x0000, half [[A:%.*]])
; CHECK-NEXT: ret half [[RET]]
;
entry:
- %ret = call half @llvm.copysign.f16(half 0xH8000, half %a)
+ %ret = call half @llvm.copysign.f16(half f0x8000, half %a)
ret half %ret
}
define half @copysign_negnan(half %a) {
; CHECK-LABEL: @copysign_negnan(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half 0xH7E00, half [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half f0x7E00, half [[A:%.*]])
; CHECK-NEXT: ret half [[RET]]
;
entry:
- %ret = call half @llvm.copysign.f16(half 0xHFE00, half %a)
+ %ret = call half @llvm.copysign.f16(half f0xFE00, half %a)
ret half %ret
}
define half @copysign_neginf(half %a) {
; CHECK-LABEL: @copysign_neginf(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half 0xH7C00, half [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call half @llvm.copysign.f16(half f0x7C00, half [[A:%.*]])
; CHECK-NEXT: ret half [[RET]]
;
entry:
- %ret = call half @llvm.copysign.f16(half 0xHFC00, half %a)
+ %ret = call half @llvm.copysign.f16(half f0xFC00, half %a)
ret half %ret
}
define <4 x half> @copysign_splat(<4 x half> %a) {
; CHECK-LABEL: @copysign_splat(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call <4 x half> @llvm.copysign.v4f16(<4 x half> splat (half 0xH3C00), <4 x half> [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call <4 x half> @llvm.copysign.v4f16(<4 x half> splat (half f0x3C00), <4 x half> [[A:%.*]])
; CHECK-NEXT: ret <4 x half> [[RET]]
;
entry:
- %ret = call <4 x half> @llvm.copysign.v4f16(<4 x half> splat(half 0xHBC00), <4 x half> %a)
+ %ret = call <4 x half> @llvm.copysign.v4f16(<4 x half> splat(half f0xBC00), <4 x half> %a)
ret <4 x half> %ret
}
@@ -346,11 +346,11 @@ entry:
define <4 x half> @copysign_vec4(<4 x half> %a) {
; CHECK-LABEL: @copysign_vec4(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[RET:%.*]] = call <4 x half> @llvm.copysign.v4f16(<4 x half> <half 0xH3C00, half 0xHBC00, half undef, half poison>, <4 x half> [[A:%.*]])
+; CHECK-NEXT: [[RET:%.*]] = call <4 x half> @llvm.copysign.v4f16(<4 x half> <half f0x3C00, half f0xBC00, half undef, half poison>, <4 x half> [[A:%.*]])
; CHECK-NEXT: ret <4 x half> [[RET]]
;
entry:
- %ret = call <4 x half> @llvm.copysign.v4f16(<4 x half> <half 0xH3C00, half 0xHBC00, half undef, half poison>, <4 x half> %a)
+ %ret = call <4 x half> @llvm.copysign.v4f16(<4 x half> <half f0x3C00, half f0xBC00, half undef, half poison>, <4 x half> %a)
ret <4 x half> %ret
}
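
; Aside (a minimal sketch, not part of the patch): the copysign rewrites above
; rely on the first operand contributing only its magnitude. f0xBC00 is half
; -1.0 and f0x3C00 is half 1.0, so a negative constant in that position folds
; to its absolute value with no change in behavior. The function below is a
; made-up example.
define half @copysign_magnitude_only(half %a) {
  %r = call half @llvm.copysign.f16(half f0xBC00, half %a) ; folds to magnitude f0x3C00
  ret half %r
}
declare half @llvm.copysign.f16(half, half)
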
diff --git a/llvm/test/Transforms/InstCombine/cos-1.ll b/llvm/test/Transforms/InstCombine/cos-1.ll
index 168d88fb3a942c..f2f17da877a3fe 100644
--- a/llvm/test/Transforms/InstCombine/cos-1.ll
+++ b/llvm/test/Transforms/InstCombine/cos-1.ll
@@ -380,7 +380,7 @@ define fp128 @tanl_negated_arg(fp128 %x) {
; ANY-NEXT: [[R:%.*]] = fneg fp128 [[TMP1]]
; ANY-NEXT: ret fp128 [[R]]
;
- %neg = fsub fp128 0xL00000000000000008000000000000000, %x
+ %neg = fsub fp128 f0x80000000000000000000000000000000, %x
%r = call fp128 @tanl(fp128 %neg)
ret fp128 %r
}
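
; Aside (a minimal sketch, not part of the patch): the constant
; f0x80000000000000000000000000000000 above is fp128 -0.0 (only the sign bit
; set), so the fsub in @tanl_negated_arg is the usual negation idiom that
; InstCombine canonicalizes to fneg; since tan is odd (tan(-x) == -tan(x)),
; the call is kept and only its result negated. The function below is a
; made-up example of the idiom.
define fp128 @negate_idiom(fp128 %x) {
  %neg = fsub fp128 f0x80000000000000000000000000000000, %x ; canonically fneg %x
  ret fp128 %neg
}
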
diff --git a/llvm/test/Transforms/InstCombine/create-class-from-logic-fcmp.ll b/llvm/test/Transforms/InstCombine/create-class-from-logic-fcmp.ll
index 9a723e8bc89ff5..deab14af1a3906 100644
--- a/llvm/test/Transforms/InstCombine/create-class-from-logic-fcmp.ll
+++ b/llvm/test/Transforms/InstCombine/create-class-from-logic-fcmp.ll
@@ -14,8 +14,8 @@ define i1 @not_isfinite_or_zero_f16(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -27,8 +27,8 @@ define i1 @not_isfinite_or_zero_f16_commute_or(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpinf, %cmpzero
ret i1 %class
}
@@ -40,7 +40,7 @@ define i1 @not_isfinite_or_zero_f16_negzero(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
%cmpzero = fcmp oeq half %x, -0.0
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
@@ -52,8 +52,8 @@ define i1 @not_isfinite_or_fabs_oeq_zero_f16(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %fabs, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %fabs, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -65,7 +65,7 @@ define <2 x i1> @not_isfinite_or_zero_v2f16(<2 x half> %x) {
; CHECK-NEXT: ret <2 x i1> [[CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp ueq <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp ueq <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp oeq <2 x half> %x, zeroinitializer
%class = or <2 x i1> %cmpzero, %cmpinf
ret <2 x i1> %class
@@ -78,7 +78,7 @@ define <2 x i1> @not_isfinite_or_zero_v2f16_pos0_neg0_vec(<2 x half> %x) {
; CHECK-NEXT: ret <2 x i1> [[CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp ueq <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp ueq <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp oeq <2 x half> %x, <half 0.0, half -0.0>
%class = or <2 x i1> %cmpzero, %cmpinf
ret <2 x i1> %class
@@ -91,7 +91,7 @@ define <2 x i1> @not_isfinite_or_zero_v2f16_commute_or(<2 x half> %x) {
; CHECK-NEXT: ret <2 x i1> [[CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp ueq <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp ueq <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp oeq <2 x half> %x, zeroinitializer
%class = or <2 x i1> %cmpinf, %cmpzero
ret <2 x i1> %class
@@ -104,8 +104,8 @@ define i1 @oeq_isinf_or_oeq_zero(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -116,8 +116,8 @@ define i1 @ueq_inf_or_oeq_zero(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 611)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -129,8 +129,8 @@ define i1 @oeq_isinf_or_fabs_oeq_zero(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %fabs, 0xH0000
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %fabs, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -142,8 +142,8 @@ define i1 @ueq_0_or_oeq_inf(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH0000
- %cmpzero = fcmp oeq half %x, 0xH7C00
+ %cmpinf = fcmp ueq half %fabs, f0x0000
+ %cmpzero = fcmp oeq half %x, f0x7C00
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -154,8 +154,8 @@ define i1 @not_isfinite_or_zero_f16_not_inf(half %x) {
; CHECK-NEXT: ret i1 true
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C01
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C01
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -167,8 +167,8 @@ define i1 @ueq_inf_or_ueq_zero(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp ueq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp ueq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -179,8 +179,8 @@ define i1 @not_isfinite_and_zero_f16(half %x) {
; CHECK-NEXT: ret i1 false
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = and i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -189,16 +189,16 @@ define i1 @not_isfinite_and_zero_f16(half %x) {
define i1 @not_isfinite_or_zero_f16_multi_use_cmp0(half %x, ptr %ptr) {
; CHECK-LABEL: @not_isfinite_or_zero_f16_multi_use_cmp0(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
; CHECK-NEXT: store i1 [[CMPINF]], ptr [[PTR:%.*]], align 1
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
store i1 %cmpinf, ptr %ptr
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -207,15 +207,15 @@ define i1 @not_isfinite_or_zero_f16_multi_use_cmp0(half %x, ptr %ptr) {
define i1 @not_isfinite_or_zero_f16_multi_use_cmp1(half %x, ptr %ptr) {
; CHECK-LABEL: @not_isfinite_or_zero_f16_multi_use_cmp1(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], f0x0000
; CHECK-NEXT: store i1 [[CMPZERO]], ptr [[PTR:%.*]], align 1
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
store i1 %cmpzero, ptr %ptr
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
@@ -224,12 +224,12 @@ define i1 @not_isfinite_or_zero_f16_multi_use_cmp1(half %x, ptr %ptr) {
; Negative test
define i1 @not_isfinite_or_zero_f16_neg_inf(half %x) {
; CHECK-LABEL: @not_isfinite_or_zero_f16_neg_inf(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ueq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ueq half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xHFC00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0xFC00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -241,8 +241,8 @@ define i1 @olt_0_or_fabs_ueq_inf(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp olt half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp olt half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -251,12 +251,12 @@ define i1 @olt_0_or_fabs_ueq_inf(half %x) {
define i1 @oeq_0_or_fabs_ult_inf(half %x) {
; CHECK-LABEL: @oeq_0_or_fabs_ult_inf(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ult half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ult half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -265,13 +265,13 @@ define i1 @oeq_0_or_fabs_ult_inf(half %x) {
define i1 @not_isfinite_or_zero_f16_multi_not_0(half %x, ptr %ptr) {
; CHECK-LABEL: @not_isfinite_or_zero_f16_multi_not_0(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], 0xH3C00
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], f0x3C00
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
%cmpzero = fcmp oeq half %x, 1.0
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
@@ -281,14 +281,14 @@ define i1 @not_isfinite_or_zero_f16_multi_not_0(half %x, ptr %ptr) {
define i1 @not_isfinite_or_zero_f16_fabs_wrong_val(half %x, half %y) {
; CHECK-LABEL: @not_isfinite_or_zero_f16_fabs_wrong_val(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[Y:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X:%.*]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %y)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -297,14 +297,14 @@ define i1 @not_isfinite_or_zero_f16_fabs_wrong_val(half %x, half %y) {
define i1 @not_isfinite_or_zero_f16_not_fabs(half %x) {
; CHECK-LABEL: @not_isfinite_or_zero_f16_not_fabs(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.canonicalize.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.canonicalize.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -320,8 +320,8 @@ define i1 @negated_isfinite_or_zero_f16(half %x) {
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpinf = fcmp one half %fabs, f0x7C00
+ %cmpzero = fcmp une half %x, f0x0000
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -333,8 +333,8 @@ define i1 @negated_isfinite_or_zero_f16_commute_and(half %x) {
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpinf = fcmp one half %fabs, f0x7C00
+ %cmpzero = fcmp une half %x, f0x0000
%not.class = and i1 %cmpinf, %cmpzero
ret i1 %not.class
}
@@ -346,7 +346,7 @@ define i1 @negated_isfinite_or_zero_f16_negzero(half %x) {
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
+ %cmpinf = fcmp one half %fabs, f0x7C00
%cmpzero = fcmp une half %x, -0.0
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
@@ -359,7 +359,7 @@ define <2 x i1> @negated_isfinite_or_zero_v2f16(<2 x half> %x) {
; CHECK-NEXT: ret <2 x i1> [[NOT_CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp one <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp one <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp une <2 x half> %x, zeroinitializer
%not.class = and <2 x i1> %cmpzero, %cmpinf
ret <2 x i1> %not.class
@@ -372,7 +372,7 @@ define <2 x i1> @negated_isfinite_or_zero_v2f16_comumte(<2 x half> %x) {
; CHECK-NEXT: ret <2 x i1> [[NOT_CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp one <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp one <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp une <2 x half> %x, zeroinitializer
%not.class = and <2 x i1> %cmpinf, %cmpzero
ret <2 x i1> %not.class
@@ -385,8 +385,8 @@ define i1 @negated_isfinite_or_zero_f16_not_une_zero(half %x) {
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
- %cmpzero = fcmp one half %x, 0xH0000
+ %cmpinf = fcmp one half %fabs, f0x7C00
+ %cmpzero = fcmp one half %x, f0x0000
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -397,8 +397,8 @@ define i1 @negated_isfinite_and_zero_f16(half %x) {
; CHECK-NEXT: ret i1 true
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpinf = fcmp one half %fabs, f0x7C00
+ %cmpzero = fcmp une half %x, f0x0000
%not.class = or i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -410,8 +410,8 @@ define i1 @negated_isfinite_or_zero_f16_swapped_constants(half %x) {
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpzero = fcmp one half %fabs, 0xH0000
- %cmpinf = fcmp une half %x, 0xH7C00
+ %cmpzero = fcmp one half %fabs, f0x0000
+ %cmpinf = fcmp une half %x, f0x7C00
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -420,16 +420,16 @@ define i1 @negated_isfinite_or_zero_f16_swapped_constants(half %x) {
define i1 @negated_isfinite_or_zero_f16_multi_use_cmp0(half %x, ptr %ptr) {
; CHECK-LABEL: @negated_isfinite_or_zero_f16_multi_use_cmp0(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp one half [[FABS]], 0xH7C00
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp one half [[FABS]], f0x7C00
; CHECK-NEXT: store i1 [[CMPINF]], ptr [[PTR:%.*]], align 1
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp une half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp une half [[X]], f0x0000
; CHECK-NEXT: [[NOT_CLASS:%.*]] = and i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
+ %cmpinf = fcmp one half %fabs, f0x7C00
store i1 %cmpinf, ptr %ptr
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpzero = fcmp une half %x, f0x0000
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -438,15 +438,15 @@ define i1 @negated_isfinite_or_zero_f16_multi_use_cmp0(half %x, ptr %ptr) {
define i1 @negated_isfinite_or_zero_f16_multi_use_cmp1(half %x, ptr %ptr) {
; CHECK-LABEL: @negated_isfinite_or_zero_f16_multi_use_cmp1(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp one half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp une half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp one half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp une half [[X]], f0x0000
; CHECK-NEXT: store i1 [[CMPZERO]], ptr [[PTR:%.*]], align 1
; CHECK-NEXT: [[NOT_CLASS:%.*]] = and i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp one half %fabs, 0xH7C00
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpinf = fcmp one half %fabs, f0x7C00
+ %cmpzero = fcmp une half %x, f0x0000
store i1 %cmpzero, ptr %ptr
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
@@ -459,8 +459,8 @@ define i1 @negated_isfinite_or_zero_f16_multi_use_cmp0_not_one_inf(half %x) {
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp une half %fabs, 0xH7C00
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpinf = fcmp une half %fabs, f0x7C00
+ %cmpzero = fcmp une half %x, f0x0000
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -469,14 +469,14 @@ define i1 @negated_isfinite_or_zero_f16_multi_use_cmp0_not_one_inf(half %x) {
define i1 @negated_isfinite_or_zero_f16_fabs_wrong_value(half %x, half %y) {
; CHECK-LABEL: @negated_isfinite_or_zero_f16_fabs_wrong_value(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[Y:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp one half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp one half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NOT_CLASS:%.*]] = and i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[NOT_CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %y)
- %cmpinf = fcmp one half %fabs, 0xH7C00
- %cmpzero = fcmp une half %x, 0xH0000
+ %cmpinf = fcmp one half %fabs, f0x7C00
+ %cmpzero = fcmp une half %x, f0x0000
%not.class = and i1 %cmpzero, %cmpinf
ret i1 %not.class
}
@@ -490,7 +490,7 @@ define i1 @fcmp_une_0_or_fcmp_une_inf(half %x) {
; CHECK-NEXT: ret i1 true
;
%cmpzero = fcmp une half %x, 0.0
- %cmpinf = fcmp une half %x, 0xH7C00
+ %cmpinf = fcmp une half %x, f0x7C00
%or = or i1 %cmpzero, %cmpinf
ret i1 %or
}
@@ -502,7 +502,7 @@ define i1 @fcmp_one_0_and_fcmp_une_fabs_inf(half %x) {
;
%fabs = call half @llvm.fabs.f16(half %x)
%cmpzero = fcmp one half %x, 0.0
- %cmpinf = fcmp une half %fabs, 0xH7C00
+ %cmpinf = fcmp une half %fabs, f0x7C00
%and = and i1 %cmpzero, %cmpinf
ret i1 %and
}
@@ -514,7 +514,7 @@ define i1 @fcmp_une_0_and_fcmp_une_fabs_inf(half %x) {
;
%fabs = call half @llvm.fabs.f16(half %x)
%cmpzero = fcmp une half %x, 0.0
- %cmpinf = fcmp une half %fabs, 0xH7C00
+ %cmpinf = fcmp une half %fabs, f0x7C00
%and = and i1 %cmpzero, %cmpinf
ret i1 %and
}
@@ -524,7 +524,7 @@ define i1 @fcmp_une_0_and_fcmp_une_neginf(half %x) {
; CHECK-NEXT: ret i1 true
;
%cmpzero = fcmp une half %x, 0.0
- %cmpinf = fcmp une half %x, 0xHFC00
+ %cmpinf = fcmp une half %x, f0xFC00
%or = or i1 %cmpzero, %cmpinf
ret i1 %or
}
@@ -535,8 +535,8 @@ define i1 @issubnormal_or_inf(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%class = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %class
}
@@ -547,8 +547,8 @@ define i1 @olt_smallest_normal_or_inf(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400 ; missing fabs
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400 ; missing fabs
%class = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %class
}
@@ -559,8 +559,8 @@ define i1 @not_issubnormal_or_inf(half %x) {
; CHECK-NEXT: ret i1 [[NOT]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp une half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp uge half %fabs, 0xH0400
+ %cmpinf = fcmp une half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp uge half %fabs, f0x0400
%not = and i1 %cmp.smallest.normal, %cmpinf
ret i1 %not
}
@@ -571,8 +571,8 @@ define i1 @issubnormal_uge_or_inf(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp uge half %fabs, 0xH0400
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp uge half %fabs, f0x0400
%class = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %class
}
@@ -581,14 +581,14 @@ define i1 @issubnormal_uge_or_inf(half %x) {
define i1 @issubnormal_or_inf_wrong_val(half %x) {
; CHECK-LABEL: @issubnormal_or_inf_wrong_val(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp olt half [[FABS]], 0xH0401
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp olt half [[FABS]], f0x0401
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_SMALLEST_NORMAL]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0401
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0401
%class = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %class
}
@@ -596,12 +596,12 @@ define i1 @issubnormal_or_inf_wrong_val(half %x) {
define i1 @issubnormal_or_inf_neg_smallest_normal(half %x) {
; CHECK-LABEL: @issubnormal_or_inf_neg_smallest_normal(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], f0x7C00
; CHECK-NEXT: ret i1 [[CMPINF]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH8400
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x8400
%class = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %class
}
@@ -609,15 +609,15 @@ define i1 @issubnormal_or_inf_neg_smallest_normal(half %x) {
define i1 @fneg_fabs_olt_neg_smallest_normal_or_inf(half %x) {
; CHECK-LABEL: @fneg_fabs_olt_neg_smallest_normal_or_inf(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp ogt half [[FABS]], 0xH0400
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp ogt half [[FABS]], f0x0400
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_SMALLEST_NORMAL]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
%fneg.fabs = fneg half %fabs
- %cmp.smallest.normal = fcmp olt half %fneg.fabs, 0xH8400
+ %cmp.smallest.normal = fcmp olt half %fneg.fabs, f0x8400
%class = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %class
}
@@ -625,12 +625,12 @@ define i1 @fneg_fabs_olt_neg_smallest_normal_or_inf(half %x) {
define i1 @issubnormal_or_finite_olt(half %x) {
; CHECK-LABEL: @issubnormal_or_finite_olt(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp one half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp one half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp olt half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmpinf = fcmp olt half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%or = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %or
}
@@ -642,8 +642,8 @@ define i1 @issubnormal_or_finite_uge(half %x) {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp uge half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmpinf = fcmp uge half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%or = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %or
}
@@ -654,20 +654,20 @@ define i1 @issubnormal_and_finite_olt(half %x) {
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp olt half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmpinf = fcmp olt half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%and = and i1 %cmp.smallest.normal, %cmpinf
ret i1 %and
}
define i1 @not_zero_and_subnormal(half %x) {
; CHECK-LABEL: @not_zero_and_subnormal(
-; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
%cmp.zero = fcmp one half %fabs, 0.0
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%or = or i1 %cmp.smallest.normal, %cmp.zero
ret i1 %or
}
@@ -678,8 +678,8 @@ define i1 @fcmp_fabs_uge_inf_or_fabs_uge_smallest_norm(half %x) {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp uge half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp uge half %fabs, 0xH0400
+ %cmpinf = fcmp uge half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp uge half %fabs, f0x0400
%or = or i1 %cmp.smallest.normal, %cmpinf
ret i1 %or
}
@@ -691,11 +691,11 @@ define i1 @fcmp_fabs_uge_inf_or_fabs_uge_smallest_norm(half %x) {
define i1 @is_finite_and_ord(half %x) {
; CHECK-LABEL: @is_finite_and_ord(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[FABS]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp ueq half %fabs, 0xH7C00
+ %is.finite = fcmp ueq half %fabs, f0x7C00
%ord = fcmp ord half %x, %x
%and = and i1 %ord, %is.finite
ret i1 %and
@@ -703,11 +703,11 @@ define i1 @is_finite_and_ord(half %x) {
define i1 @is_finite_and_uno(half %x) {
; CHECK-LABEL: @is_finite_and_uno(
-; CHECK-NEXT: [[AND:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[AND:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp ueq half %fabs, 0xH7C00
+ %is.finite = fcmp ueq half %fabs, f0x7C00
%uno = fcmp uno half %x, %x
%and = and i1 %uno, %is.finite
ret i1 %and
@@ -718,7 +718,7 @@ define i1 @is_finite_or_ord(half %x) {
; CHECK-NEXT: ret i1 true
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp ueq half %fabs, 0xH7C00
+ %is.finite = fcmp ueq half %fabs, f0x7C00
%ord = fcmp ord half %x, %x
%or = or i1 %ord, %is.finite
ret i1 %or
@@ -727,11 +727,11 @@ define i1 @is_finite_or_ord(half %x) {
define i1 @is_finite_or_uno(half %x) {
; CHECK-LABEL: @is_finite_or_uno(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp ueq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp ueq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp ueq half %fabs, 0xH7C00
+ %is.finite = fcmp ueq half %fabs, f0x7C00
%uno = fcmp uno half %x, %x
%or = or i1 %uno, %is.finite
ret i1 %or
@@ -740,24 +740,24 @@ define i1 @is_finite_or_uno(half %x) {
define i1 @oeq_isinf_or_uno(half %x) {
; CHECK-LABEL: @oeq_isinf_or_uno(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ueq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ueq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %uno = fcmp uno half %x, 0xH0000
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %uno = fcmp uno half %x, f0x0000
%class = or i1 %cmpinf, %uno
ret i1 %class
}
define i1 @oeq_isinf_or_ord(half %x) {
; CHECK-LABEL: @oeq_isinf_or_ord(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %uno = fcmp ord half %x, 0xH0000
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %uno = fcmp ord half %x, f0x0000
%class = or i1 %cmpinf, %uno
ret i1 %class
}
@@ -767,8 +767,8 @@ define i1 @oeq_isinf_and_uno(half %x) {
; CHECK-NEXT: ret i1 false
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %uno = fcmp uno half %x, 0xH0000
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %uno = fcmp uno half %x, f0x0000
%and = and i1 %cmpinf, %uno
ret i1 %and
}
@@ -776,12 +776,12 @@ define i1 @oeq_isinf_and_uno(half %x) {
define i1 @oeq_isinf_and_ord(half %x) {
; CHECK-LABEL: @oeq_isinf_and_ord(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp oeq half %fabs, 0xH7C00
- %uno = fcmp ord half %x, 0xH0000
+ %cmpinf = fcmp oeq half %fabs, f0x7C00
+ %uno = fcmp ord half %x, f0x0000
%and = and i1 %cmpinf, %uno
ret i1 %and
}
@@ -797,13 +797,13 @@ define i1 @isnormal_or_zero(half %x) #0 {
; CHECK-NEXT: ret i1 [[AND1]]
;
entry:
- %iseq = fcmp ord half %x, 0xH0000
+ %iseq = fcmp ord half %x, f0x0000
%fabs = tail call half @llvm.fabs.f16(half %x)
- %isinf = fcmp ult half %fabs, 0xH7C00
- %isnormal = fcmp uge half %fabs, 0xH0400
+ %isinf = fcmp ult half %fabs, f0x7C00
+ %isnormal = fcmp uge half %fabs, f0x0400
%and = and i1 %iseq, %isinf
%and1 = and i1 %isnormal, %and
- %cmp = fcmp oeq half %x, 0xH0000
+ %cmp = fcmp oeq half %x, f0x0000
%spec.select = or i1 %cmp, %and1
ret i1 %spec.select
}
@@ -816,8 +816,8 @@ define i1 @isnormal_uge_or_zero_oeq(half %x) #0 {
;
entry:
%fabs = tail call half @llvm.fabs.f16(half %x)
- %is.normal = fcmp uge half %fabs, 0xH0400
- %is.zero = fcmp oeq half %x, 0xH0000
+ %is.normal = fcmp uge half %fabs, f0x0400
+ %is.zero = fcmp oeq half %x, f0x0000
%or = or i1 %is.normal, %is.zero
ret i1 %or
}
@@ -829,12 +829,12 @@ entry:
; -> ord
define i1 @isnormalinf_or_ord(half %x) #0 {
; CHECK-LABEL: @isnormalinf_or_ord(
-; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
- %is.ord = fcmp ord half %x, 0xH0000
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
+ %is.ord = fcmp ord half %x, f0x0000
%or = or i1 %is.normal.inf, %is.ord
ret i1 %or
}
@@ -842,12 +842,12 @@ define i1 @isnormalinf_or_ord(half %x) #0 {
; -> ord
define i1 @ord_or_isnormalinf(half %x) #0 {
; CHECK-LABEL: @ord_or_isnormalinf(
-; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
- %is.ord = fcmp ord half %x, 0xH0000
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
+ %is.ord = fcmp ord half %x, f0x0000
%or = or i1 %is.ord, %is.normal.inf
ret i1 %or
}
@@ -856,11 +856,11 @@ define i1 @ord_or_isnormalinf(half %x) #0 {
; -> iszero
define i1 @une_or_oge_smallest_normal(half %x) #0 {
; CHECK-LABEL: @une_or_oge_smallest_normal(
-; CHECK-NEXT: [[OR:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OR:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[OR]]
;
- %is.normal.inf = fcmp oge half %x, 0xH0400
- %is.une = fcmp une half %x, 0xH0000
+ %is.normal.inf = fcmp oge half %x, f0x0400
+ %is.une = fcmp une half %x, f0x0000
%or = or i1 %is.une, %is.normal.inf
ret i1 %or
}
@@ -872,8 +872,8 @@ define i1 @isnormalinf_or_inf(half %x) #0 {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
- %is.inf = fcmp oeq half %fabs, 0xH7C00
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
+ %is.inf = fcmp oeq half %fabs, f0x7C00
%or = or i1 %is.normal.inf, %is.inf
ret i1 %or
}
@@ -885,8 +885,8 @@ define i1 @posisnormalinf_or_posinf(half %x) #0 {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.pos.normal.inf = fcmp oge half %x, 0xH0400
- %is.inf = fcmp oeq half %fabs, 0xH7C00
+ %is.pos.normal.inf = fcmp oge half %x, f0x0400
+ %is.inf = fcmp oeq half %fabs, f0x7C00
%or = or i1 %is.pos.normal.inf, %is.inf
ret i1 %or
}
@@ -898,8 +898,8 @@ define i1 @isnormalinf_or_posinf(half %x) #0 {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
- %is.pos.inf = fcmp oeq half %x, 0xH7C00
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
+ %is.pos.inf = fcmp oeq half %x, f0x7C00
%or = or i1 %is.normal.inf, %is.pos.inf
ret i1 %or
}
@@ -908,12 +908,12 @@ define i1 @isnormalinf_or_posinf(half %x) #0 {
define i1 @isnormalinf_and_inf(half %x) #0 {
; CHECK-LABEL: @isnormalinf_and_inf(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
- %is.inf = fcmp oeq half %fabs, 0xH7C00
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
+ %is.inf = fcmp oeq half %fabs, f0x7C00
%and = and i1 %is.normal.inf, %is.inf
ret i1 %and
}
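
(Aside: the smallest-normal threshold f0x0400 could presumably also be spelled with the new hexadecimal notation, since 0x1p-14 is exactly representable in half. A sketch under that assumption; the function name is hypothetical.)

define i1 @subnormal_boundary_sketch(half %x) {
  %fabs = call half @llvm.fabs.f16(half %x)
  %bits = fcmp olt half %fabs, f0x0400           ; bitpattern form from this test
  %hex  = fcmp olt half %fabs, 0x1p-14           ; new hexadecimal-literal form
  %dec  = fcmp olt half %fabs, 0.00006103515625  ; exact decimal value of 2^-14
  %t0 = and i1 %bits, %hex
  %t1 = and i1 %t0, %dec
  ret i1 %t1
}
declare half @llvm.fabs.f16(half)
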
@@ -921,12 +921,12 @@ define i1 @isnormalinf_and_inf(half %x) #0 {
; -> pinf
define i1 @posisnormalinf_and_posinf(half %x) #0 {
; CHECK-LABEL: @posisnormalinf_and_posinf(
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.pos.normal.inf = fcmp oge half %x, 0xH0400
- %is.inf = fcmp oeq half %fabs, 0xH7C00
+ %is.pos.normal.inf = fcmp oge half %x, f0x0400
+ %is.inf = fcmp oeq half %fabs, f0x7C00
%and = and i1 %is.pos.normal.inf, %is.inf
ret i1 %and
}
@@ -934,12 +934,12 @@ define i1 @posisnormalinf_and_posinf(half %x) #0 {
; -> pinf
define i1 @isnormalinf_and_posinf(half %x) #0 {
; CHECK-LABEL: @isnormalinf_and_posinf(
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
- %is.pos.inf = fcmp oeq half %x, 0xH7C00
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
+ %is.pos.inf = fcmp oeq half %x, f0x7C00
%and = and i1 %is.normal.inf, %is.pos.inf
ret i1 %and
}
@@ -954,8 +954,8 @@ define i1 @not_isnormalinf_or_ord(half %x) #0 {
; CHECK-NEXT: ret i1 true
;
%fabs = call half @llvm.fabs.f16(half %x)
- %not.is.normal.inf = fcmp ult half %fabs, 0xH0400
- %is.ord = fcmp ord half %x, 0xH0000
+ %not.is.normal.inf = fcmp ult half %fabs, f0x0400
+ %is.ord = fcmp ord half %x, f0x0000
%or = or i1 %not.is.normal.inf, %is.ord
ret i1 %or
}
@@ -967,8 +967,8 @@ define i1 @not_isnormalinf_and_ord(half %x) #0 {
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %not.is.normal.inf = fcmp ult half %fabs, 0xH0400
- %is.ord = fcmp ord half %x, 0xH0000
+ %not.is.normal.inf = fcmp ult half %fabs, f0x0400
+ %is.ord = fcmp ord half %x, f0x0000
%and = and i1 %not.is.normal.inf, %is.ord
ret i1 %and
}
@@ -977,12 +977,12 @@ define i1 @not_isnormalinf_and_ord(half %x) #0 {
define i1 @not_isnormalinf_or_inf(half %x) #0 {
; CHECK-LABEL: @not_isnormalinf_or_inf(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp une half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp une half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %not.is.normal.inf = fcmp ult half %fabs, 0xH0400
- %is.inf = fcmp olt half %fabs, 0xH7C00
+ %not.is.normal.inf = fcmp ult half %fabs, f0x0400
+ %is.inf = fcmp olt half %fabs, f0x7C00
%or = or i1 %not.is.normal.inf, %is.inf
ret i1 %or
}
@@ -991,11 +991,11 @@ define i1 @not_isnormalinf_or_inf(half %x) #0 {
define i1 @not_isnormalinf_or_uno(half %x) #0 {
; CHECK-LABEL: @not_isnormalinf_or_uno(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp ult half [[FABS]], 0xH0400
+; CHECK-NEXT: [[OR:%.*]] = fcmp ult half [[FABS]], f0x0400
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %not.is.normal.inf = fcmp ult half %fabs, 0xH0400
+ %not.is.normal.inf = fcmp ult half %fabs, f0x0400
%is.uno = fcmp uno half %fabs, 0.0
%or = or i1 %not.is.normal.inf, %is.uno
ret i1 %or
@@ -1005,11 +1005,11 @@ define i1 @not_isnormalinf_or_uno(half %x) #0 {
define i1 @not_isnormalinf_or_uno_nofabs(half %x) #0 {
; CHECK-LABEL: @not_isnormalinf_or_uno_nofabs(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp ult half [[FABS]], 0xH0400
+; CHECK-NEXT: [[OR:%.*]] = fcmp ult half [[FABS]], f0x0400
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %not.is.normal.inf = fcmp ult half %fabs, 0xH0400
+ %not.is.normal.inf = fcmp ult half %fabs, f0x0400
%is.uno = fcmp uno half %x, 0.0
%or = or i1 %not.is.normal.inf, %is.uno
ret i1 %or
@@ -1022,8 +1022,8 @@ define i1 @not_negisnormalinf_or_inf(half %x) #0 {
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %not.is.neg.normal.inf = fcmp ult half %x, 0xH0400
- %is.inf = fcmp oeq half %fabs, 0xH7C00
+ %not.is.neg.normal.inf = fcmp ult half %x, f0x0400
+ %is.inf = fcmp oeq half %fabs, f0x7C00
%or = or i1 %not.is.neg.normal.inf, %is.inf
ret i1 %or
}
@@ -1034,8 +1034,8 @@ define i1 @not_negisnormalinf_or_posinf(half %x) #0 {
; CHECK-NEXT: [[OR:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 767)
; CHECK-NEXT: ret i1 [[OR]]
;
- %not.is.pos.normal.inf = fcmp ult half %x, 0xH0400
- %is.inf = fcmp oeq half %x, 0xH7C00
+ %not.is.pos.normal.inf = fcmp ult half %x, f0x0400
+ %is.inf = fcmp oeq half %x, f0x7C00
%or = or i1 %not.is.pos.normal.inf, %is.inf
ret i1 %or
}
@@ -1046,9 +1046,9 @@ define i1 @not_isposnormalinf_and_isnormalinf(half %x) #0 {
; CHECK-NEXT: [[AND:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 12)
; CHECK-NEXT: ret i1 [[AND]]
;
- %not.is.pos.normal.inf = fcmp ult half %x, 0xH0400
+ %not.is.pos.normal.inf = fcmp ult half %x, f0x0400
%fabs = call half @llvm.fabs.f16(half %x)
- %is.normal.inf = fcmp oge half %fabs, 0xH0400
+ %is.normal.inf = fcmp oge half %fabs, f0x0400
%and = and i1 %not.is.pos.normal.inf, %is.normal.inf
ret i1 %and
}
@@ -1056,11 +1056,11 @@ define i1 @not_isposnormalinf_and_isnormalinf(half %x) #0 {
; -> ord
define i1 @olt_smallest_normal_or_ord(half %x) #0 {
; CHECK-LABEL: @olt_smallest_normal_or_ord(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%ord = fcmp ord half %x, 0.0
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %cmp.smallest.normal, %ord
ret i1 %class
}
@@ -1072,19 +1072,19 @@ define i1 @olt_smallest_normal_or_uno(half %x) #0 {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%uno = fcmp uno half %x, 0.0
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %cmp.smallest.normal, %uno
ret i1 %class
}
define i1 @olt_smallest_normal_or_finite(half %x) #0 {
; CHECK-LABEL: @olt_smallest_normal_or_finite(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.finite = fcmp olt half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %is.finite = fcmp olt half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %cmp.smallest.normal, %is.finite
ret i1 %class
}
@@ -1094,7 +1094,7 @@ define i1 @uge_smallest_normal_or_ord(half %x) #0 {
; CHECK-NEXT: ret i1 true
;
%ord = fcmp ord half %x, 0.0
- %cmp.smallest.normal = fcmp uge half %x, 0xH0400
+ %cmp.smallest.normal = fcmp uge half %x, f0x0400
%class = or i1 %cmp.smallest.normal, %ord
ret i1 %class
}
@@ -1102,11 +1102,11 @@ define i1 @uge_smallest_normal_or_ord(half %x) #0 {
; -> nan | pnormal | pinf
define i1 @uge_smallest_normal_or_uno(half %x) #0 {
; CHECK-LABEL: @uge_smallest_normal_or_uno(
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp uge half [[X:%.*]], 0xH0400
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp uge half [[X:%.*]], f0x0400
; CHECK-NEXT: ret i1 [[CMP_SMALLEST_NORMAL]]
;
%uno = fcmp uno half %x, 0.0
- %cmp.smallest.normal = fcmp uge half %x, 0xH0400
+ %cmp.smallest.normal = fcmp uge half %x, f0x0400
%class = or i1 %cmp.smallest.normal, %uno
ret i1 %class
}
@@ -1114,11 +1114,11 @@ define i1 @uge_smallest_normal_or_uno(half %x) #0 {
; -> uno
define i1 @uge_smallest_normal_and_uno(half %x) #0 {
; CHECK-LABEL: @uge_smallest_normal_and_uno(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%uno = fcmp uno half %x, 0.0
- %cmp.smallest.normal = fcmp uge half %x, 0xH0400
+ %cmp.smallest.normal = fcmp uge half %x, f0x0400
%class = and i1 %cmp.smallest.normal, %uno
ret i1 %class
}
@@ -1126,11 +1126,11 @@ define i1 @uge_smallest_normal_and_uno(half %x) #0 {
; -> true
define i1 @olt_infinity_or_finite(half %x) #0 {
; CHECK-LABEL: @olt_infinity_or_finite(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %lt.infinity = fcmp olt half %x, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %cmp.smallest.normal, %lt.infinity
ret i1 %class
}
@@ -1141,8 +1141,8 @@ define i1 @olt_infinity_and_finite(half %x) #0 { ; bustttedddd
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 252)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %lt.infinity = fcmp olt half %x, f0x7C00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = and i1 %cmp.smallest.normal, %lt.infinity
ret i1 %class
}
@@ -1150,11 +1150,11 @@ define i1 @olt_infinity_and_finite(half %x) #0 { ; bustttedddd
; -> ord
define i1 @olt_infinity_or_ord(half %x) #0 {
; CHECK-LABEL: @olt_infinity_or_ord(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
- %ord = fcmp ord half %x, 0xH0400
+ %lt.infinity = fcmp olt half %x, f0x7C00
+ %ord = fcmp ord half %x, f0x0400
%class = or i1 %lt.infinity, %ord
ret i1 %class
}
@@ -1162,23 +1162,23 @@ define i1 @olt_infinity_or_ord(half %x) #0 {
; -> ~posinf
define i1 @olt_infinity_or_uno(half %x) #0 {
; CHECK-LABEL: @olt_infinity_or_uno(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
- %uno = fcmp uno half %x, 0xH0400
+ %lt.infinity = fcmp olt half %x, f0x7C00
+ %uno = fcmp uno half %x, f0x0400
%class = or i1 %lt.infinity, %uno
ret i1 %class
}
define i1 @olt_infinity_or_subnormal(half %x) #0 {
; CHECK-LABEL: @olt_infinity_or_subnormal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
+ %lt.infinity = fcmp olt half %x, f0x7C00
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%class = or i1 %lt.infinity, %is.subnormal
ret i1 %class
}
@@ -1188,9 +1188,9 @@ define i1 @olt_infinity_and_subnormal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 240)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
+ %lt.infinity = fcmp olt half %x, f0x7C00
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%class = and i1 %lt.infinity, %is.subnormal
ret i1 %class
}
@@ -1200,9 +1200,9 @@ define i1 @olt_infinity_and_not_subnormal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 268)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
+ %lt.infinity = fcmp olt half %x, f0x7C00
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
+ %is.subnormal = fcmp olt half %fabs, f0x0400
%not.subnormal = xor i1 %is.subnormal, true
%class = and i1 %lt.infinity, %not.subnormal
ret i1 %class
@@ -1211,12 +1211,12 @@ define i1 @olt_infinity_and_not_subnormal(half %x) #0 {
; -> ninf
define i1 @olt_infinity_and_ueq_inf(half %x) #0 {
; CHECK-LABEL: @olt_infinity_and_ueq_inf(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
+ %lt.infinity = fcmp olt half %x, f0x7C00
%fabs = call half @llvm.fabs.f16(half %x)
- %eq.inf = fcmp ueq half %fabs, 0xH7C00
+ %eq.inf = fcmp ueq half %fabs, f0x7C00
%class = and i1 %lt.infinity, %eq.inf
ret i1 %class
}
@@ -1226,8 +1226,8 @@ define i1 @olt_infinity_or_ueq_inf(half %x) #0 {
; CHECK-LABEL: @olt_infinity_or_ueq_inf(
; CHECK-NEXT: ret i1 true
;
- %lt.infinity = fcmp olt half %x, 0xH7C00
- %eq.inf = fcmp ueq half %x, 0xH7C00
+ %lt.infinity = fcmp olt half %x, f0x7C00
+ %eq.inf = fcmp ueq half %x, f0x7C00
%class = or i1 %lt.infinity, %eq.inf
ret i1 %class
}
@@ -1238,8 +1238,8 @@ define i1 @olt_smallest_normal_or_ueq_inf(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 767)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.normal = fcmp olt half %x, 0xH0400
- %eq.inf = fcmp ueq half %x, 0xH7C00
+ %lt.normal = fcmp olt half %x, f0x0400
+ %eq.inf = fcmp ueq half %x, f0x7C00
%class = or i1 %lt.normal, %eq.inf
ret i1 %class
}
@@ -1247,11 +1247,11 @@ define i1 @olt_smallest_normal_or_ueq_inf(half %x) #0 {
; -> ~pinf
define i1 @olt_smallest_normal_or_une_inf(half %x) #0 {
; CHECK-LABEL: @olt_smallest_normal_or_une_inf(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.normal = fcmp olt half %x, 0xH0400
- %eq.inf = fcmp une half %x, 0xH7C00
+ %lt.normal = fcmp olt half %x, f0x0400
+ %eq.inf = fcmp une half %x, f0x7C00
%class = or i1 %lt.normal, %eq.inf
ret i1 %class
}
@@ -1262,8 +1262,8 @@ define i1 @olt_smallest_normal_and_une_inf(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 252)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.normal = fcmp olt half %x, 0xH0400
- %eq.inf = fcmp une half %x, 0xH7C00
+ %lt.normal = fcmp olt half %x, f0x0400
+ %eq.inf = fcmp une half %x, f0x7C00
%class = and i1 %lt.normal, %eq.inf
ret i1 %class
}
@@ -1273,10 +1273,10 @@ define i1 @olt_smallest_normal_and_une_inf_or_oeq_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 252)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.normal = fcmp olt half %x, 0xH0400
- %eq.inf = fcmp une half %x, 0xH7C00
+ %lt.normal = fcmp olt half %x, f0x0400
+ %eq.inf = fcmp une half %x, f0x7C00
%class = and i1 %lt.normal, %eq.inf
- %eq.normal = fcmp oeq half %x, 0xH0400
+ %eq.normal = fcmp oeq half %x, f0x0400
%eq.largest.normal = or i1 %eq.normal, %class
ret i1 %class
}
@@ -1286,10 +1286,10 @@ define i1 @olt_smallest_normal_and_une_inf_or_one_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 252)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %lt.normal = fcmp olt half %x, 0xH0400
- %eq.inf = fcmp une half %x, 0xH7C00
+ %lt.normal = fcmp olt half %x, f0x0400
+ %eq.inf = fcmp une half %x, f0x7C00
%class = and i1 %lt.normal, %eq.inf
- %ne.normal = fcmp one half %x, 0xH0400
+ %ne.normal = fcmp one half %x, f0x0400
%eq.largest.normal = or i1 %ne.normal, %class
ret i1 %class
}
@@ -1297,23 +1297,23 @@ define i1 @olt_smallest_normal_and_une_inf_or_one_smallest_normal(half %x) #0 {
define i1 @oge_fabs_eq_inf_and_ord(half %x) #0 {
; CHECK-LABEL: @oge_fabs_eq_inf_and_ord(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %oge.fabs.inf = fcmp oge half %fabs, 0xH7C00
- %ord = fcmp ord half %x, 0xH0000
+ %oge.fabs.inf = fcmp oge half %fabs, f0x7C00
+ %ord = fcmp ord half %x, f0x0000
%and = and i1 %oge.fabs.inf, %ord
ret i1 %and
}
define i1 @oge_eq_inf_and_ord(half %x) #0 {
; CHECK-LABEL: @oge_eq_inf_and_ord(
-; CHECK-NEXT: [[OGE_FABS_INF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[OGE_FABS_INF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[OGE_FABS_INF]]
;
- %oge.fabs.inf = fcmp oge half %x, 0xH7C00
- %ord = fcmp ord half %x, 0xH0000
+ %oge.fabs.inf = fcmp oge half %x, f0x7C00
+ %ord = fcmp ord half %x, f0x0000
%and = and i1 %oge.fabs.inf, %ord
ret i1 %and
}
@@ -1321,23 +1321,23 @@ define i1 @oge_eq_inf_and_ord(half %x) #0 {
define i1 @oge_fabs_eq_inf_or_uno(half %x) #0 {
; CHECK-LABEL: @oge_fabs_eq_inf_or_uno(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp ueq half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp ueq half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %oge.fabs.inf = fcmp oge half %fabs, 0xH7C00
- %uno = fcmp uno half %x, 0xH0000
+ %oge.fabs.inf = fcmp oge half %fabs, f0x7C00
+ %uno = fcmp uno half %x, f0x0000
%or = or i1 %oge.fabs.inf, %uno
ret i1 %or
}
define i1 @oge_eq_inf_or_uno(half %x) #0 {
; CHECK-LABEL: @oge_eq_inf_or_uno(
-; CHECK-NEXT: [[OR:%.*]] = fcmp ueq half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp ueq half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
- %oge.fabs.inf = fcmp oge half %x, 0xH7C00
- %uno = fcmp uno half %x, 0xH0000
+ %oge.fabs.inf = fcmp oge half %x, f0x7C00
+ %uno = fcmp uno half %x, f0x0000
%or = or i1 %oge.fabs.inf, %uno
ret i1 %or
}
@@ -1345,23 +1345,23 @@ define i1 @oge_eq_inf_or_uno(half %x) #0 {
define i1 @ult_fabs_eq_inf_and_ord(half %x) #0 {
; CHECK-LABEL: @ult_fabs_eq_inf_and_ord(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[FABS]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[FABS]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %ult.fabs.inf = fcmp ult half %fabs, 0xH7C00
- %ord = fcmp ord half %x, 0xH0000
+ %ult.fabs.inf = fcmp ult half %fabs, f0x7C00
+ %ord = fcmp ord half %x, f0x0000
%and = and i1 %ult.fabs.inf, %ord
ret i1 %and
}
define i1 @ult_eq_inf_and_ord(half %x) #0 {
; CHECK-LABEL: @ult_eq_inf_and_ord(
-; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[AND:%.*]] = fcmp one half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[AND]]
;
- %ult.fabs.inf = fcmp ult half %x, 0xH7C00
- %ord = fcmp ord half %x, 0xH0000
+ %ult.fabs.inf = fcmp ult half %x, f0x7C00
+ %ord = fcmp ord half %x, f0x0000
%and = and i1 %ult.fabs.inf, %ord
ret i1 %and
}
@@ -1369,23 +1369,23 @@ define i1 @ult_eq_inf_and_ord(half %x) #0 {
define i1 @ult_fabs_eq_inf_or_uno(half %x) #0 {
; CHECK-LABEL: @ult_fabs_eq_inf_or_uno(
; CHECK-NEXT: [[TMP1:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[OR:%.*]] = fcmp une half [[TMP1]], 0xH7C00
+; CHECK-NEXT: [[OR:%.*]] = fcmp une half [[TMP1]], f0x7C00
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %ult.fabs.inf = fcmp ult half %fabs, 0xH7C00
- %uno = fcmp uno half %x, 0xH0000
+ %ult.fabs.inf = fcmp ult half %fabs, f0x7C00
+ %uno = fcmp uno half %x, f0x0000
%or = or i1 %ult.fabs.inf, %uno
ret i1 %or
}
define i1 @ult_eq_inf_or_uno(half %x) #0 {
; CHECK-LABEL: @ult_eq_inf_or_uno(
-; CHECK-NEXT: [[ULT_FABS_INF:%.*]] = fcmp une half [[X:%.*]], 0xH7C00
+; CHECK-NEXT: [[ULT_FABS_INF:%.*]] = fcmp une half [[X:%.*]], f0x7C00
; CHECK-NEXT: ret i1 [[ULT_FABS_INF]]
;
- %ult.fabs.inf = fcmp ult half %x, 0xH7C00
- %uno = fcmp uno half %x, 0xH0000
+ %ult.fabs.inf = fcmp ult half %x, f0x7C00
+ %uno = fcmp uno half %x, f0x0000
%or = or i1 %ult.fabs.inf, %uno
ret i1 %or
}
@@ -1394,13 +1394,13 @@ define i1 @ult_eq_inf_or_uno(half %x) #0 {
; Can't do anything with this
define i1 @oeq_neginfinity_or_oeq_smallest_normal(half %x) #0 {
; CHECK-LABEL: @oeq_neginfinity_or_oeq_smallest_normal(
-; CHECK-NEXT: [[OEQ_NEG_INFINITY:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp oeq half [[X]], 0xH0400
+; CHECK-NEXT: [[OEQ_NEG_INFINITY:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp oeq half [[X]], f0x0400
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[OEQ_NEG_INFINITY]], [[CMP_SMALLEST_NORMAL]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp oeq half %x, 0xH0400
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp oeq half %x, f0x0400
%class = or i1 %oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1411,8 +1411,8 @@ define i1 @oeq_neginfinity_or_olt_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 252)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1420,11 +1420,11 @@ define i1 @oeq_neginfinity_or_olt_smallest_normal(half %x) #0 {
; -> ninf
define i1 @oeq_neginfinity_and_olt_smallest_normal(half %x) #0 {
; CHECK-LABEL: @oeq_neginfinity_and_olt_smallest_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = and i1 %oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1435,8 +1435,8 @@ define i1 @oeq_neginfinity_or_oge_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 772)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp oge half %x, 0xH0400
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp oge half %x, f0x0400
%class = or i1 %oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1446,8 +1446,8 @@ define i1 @oeq_neginfinity_and_oge_smallest_normal(half %x) #0 {
; CHECK-LABEL: @oeq_neginfinity_and_oge_smallest_normal(
; CHECK-NEXT: ret i1 false
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp oge half %x, 0xH0400
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp oge half %x, f0x0400
%class = and i1 %oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
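
(Aside: the f0xFC00 pattern in these negative-infinity tests corresponds to the new special-value spelling as well. A sketch assuming the -inf token from the patch notes is usable in operand position; the function name is hypothetical.)

define i1 @neginf_check_sketch(half %x) {
  %bits = fcmp oeq half %x, f0xFC00  ; bitpattern form from this test
  %tok  = fcmp oeq half %x, -inf     ; special-value token per the patch notes
  %same = and i1 %bits, %tok
  ret i1 %same
}
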
@@ -1455,10 +1455,10 @@ define i1 @oeq_neginfinity_and_oge_smallest_normal(half %x) #0 {
; -> ord
define i1 @oeq_neginfinity_or_ord(half %x) #0 {
; CHECK-LABEL: @oeq_neginfinity_or_ord(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
%ord = fcmp ord half %x, 0.0
%class = or i1 %oeq.neg.infinity, %ord
ret i1 %class
@@ -1467,10 +1467,10 @@ define i1 @oeq_neginfinity_or_ord(half %x) #0 {
; -> ninf
define i1 @oeq_neginfinity_and_ord(half %x) #0 {
; CHECK-LABEL: @oeq_neginfinity_and_ord(
-; CHECK-NEXT: [[OEQ_NEG_INFINITY:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[OEQ_NEG_INFINITY:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[OEQ_NEG_INFINITY]]
;
- %oeq.neg.infinity = fcmp oeq half %x, 0xHFC00
+ %oeq.neg.infinity = fcmp oeq half %x, f0xFC00
%ord = fcmp ord half %x, 0.0
%class = and i1 %oeq.neg.infinity, %ord
ret i1 %class
@@ -1479,13 +1479,13 @@ define i1 @oeq_neginfinity_and_ord(half %x) #0 {
; can't do anything with this
define i1 @une_neginfinity_or_oeq_smallest_normal(half %x) #0 {
; CHECK-LABEL: @une_neginfinity_or_oeq_smallest_normal(
-; CHECK-NEXT: [[UNE_NEG_INFINITY:%.*]] = fcmp une half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp oeq half [[X]], 0xH0400
+; CHECK-NEXT: [[UNE_NEG_INFINITY:%.*]] = fcmp une half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp oeq half [[X]], f0x0400
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[UNE_NEG_INFINITY]], [[CMP_SMALLEST_NORMAL]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %une.neg.infinity = fcmp une half %x, 0xHFC00
- %cmp.smallest.normal = fcmp oeq half %x, 0xH0400
+ %une.neg.infinity = fcmp une half %x, f0xFC00
+ %cmp.smallest.normal = fcmp oeq half %x, f0x0400
%class = or i1 %une.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1495,7 +1495,7 @@ define i1 @une_neginfinity_or_ord(half %x) #0 {
; CHECK-LABEL: @une_neginfinity_or_ord(
; CHECK-NEXT: ret i1 true
;
- %une.neg.infinity = fcmp une half %x, 0xHFC00
+ %une.neg.infinity = fcmp une half %x, f0xFC00
%ord = fcmp ord half %x, 0.0
%class = or i1 %une.neg.infinity, %ord
ret i1 %class
@@ -1504,10 +1504,10 @@ define i1 @une_neginfinity_or_ord(half %x) #0 {
; -> ~(nan | ninf)
define i1 @une_neginfinity_and_ord(half %x) #0 {
; CHECK-LABEL: @une_neginfinity_and_ord(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp one half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %une.neg.infinity = fcmp une half %x, 0xHFC00
+ %une.neg.infinity = fcmp une half %x, f0xFC00
%ord = fcmp ord half %x, 0.0
%class = and i1 %une.neg.infinity, %ord
ret i1 %class
@@ -1516,11 +1516,11 @@ define i1 @une_neginfinity_and_ord(half %x) #0 {
; -> ord
define i1 @one_neginfinity_or_olt_smallest_normal(half %x) #0 {
; CHECK-LABEL: @one_neginfinity_or_olt_smallest_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %one.neg.infinity = fcmp one half %x, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %one.neg.infinity = fcmp one half %x, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %one.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1531,8 +1531,8 @@ define i1 @one_neginfinity_and_olt_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 248)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %one.neg.infinity = fcmp one half %x, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %one.neg.infinity = fcmp one half %x, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = and i1 %one.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
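
(Aside: the uno/ord comparisons against zero are untouched by this patch, but the new NaN spellings give a direct way to write the quiet-NaN pattern. The token form below is an assumption based on the patch description; the bitpattern is the canonical binary16 quiet NaN.)

; Sketch of the new NaN spellings; function names are hypothetical.
define half @qnan_bits()  { ret half f0x7E00 }  ; exponent all-ones, quiet bit set
define half @qnan_token() { ret half +qnan }    ; assumed token spelling per patch
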
@@ -1540,10 +1540,10 @@ define i1 @one_neginfinity_and_olt_smallest_normal(half %x) #0 {
; -> ~ninf
define i1 @one_neginfinity_or_uno(half %x) #0 {
; CHECK-LABEL: @one_neginfinity_or_uno(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp une half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %one.neg.infinity = fcmp one half %x, 0xHFC00
+ %one.neg.infinity = fcmp one half %x, f0xFC00
%uno = fcmp uno half %x, 0.0
%class = or i1 %one.neg.infinity, %uno
ret i1 %class
@@ -1554,7 +1554,7 @@ define i1 @one_neginfinity_and_ord(half %x) #0 {
; CHECK-LABEL: @one_neginfinity_and_ord(
; CHECK-NEXT: ret i1 false
;
- %one.neg.infinity = fcmp one half %x, 0xHFC00
+ %one.neg.infinity = fcmp one half %x, f0xFC00
%ord = fcmp uno half %x, 0.0
%class = and i1 %one.neg.infinity, %ord
ret i1 %class
@@ -1566,8 +1566,8 @@ define i1 @one_neginfinity_and_uge_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 768)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %one.neg.infinity = fcmp one half %x, 0xHFC00
- %cmp.smallest.normal = fcmp uge half %x, 0xH0400
+ %one.neg.infinity = fcmp one half %x, f0xFC00
+ %cmp.smallest.normal = fcmp uge half %x, f0x0400
%class = and i1 %one.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1578,8 +1578,8 @@ define i1 @ueq_neginfinity_or_olt_smallest_normal(half %x) #0 {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 255)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %ueq.neg.infinity = fcmp ueq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %ueq.neg.infinity = fcmp ueq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %ueq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1587,11 +1587,11 @@ define i1 @ueq_neginfinity_or_olt_smallest_normal(half %x) #0 {
; -> ninf
define i1 @ueq_neginfinity_and_olt_smallest_normal(half %x) #0 {
; CHECK-LABEL: @ueq_neginfinity_and_olt_smallest_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %ueq.neg.infinity = fcmp ueq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %ueq.neg.infinity = fcmp ueq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = and i1 %ueq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1599,10 +1599,10 @@ define i1 @ueq_neginfinity_and_olt_smallest_normal(half %x) #0 {
; -> nan|ninf
define i1 @ueq_neginfinity_or_uno(half %x) #0 {
; CHECK-LABEL: @ueq_neginfinity_or_uno(
-; CHECK-NEXT: [[UEQ_NEG_INFINITY:%.*]] = fcmp ueq half [[X:%.*]], 0xHFC00
+; CHECK-NEXT: [[UEQ_NEG_INFINITY:%.*]] = fcmp ueq half [[X:%.*]], f0xFC00
; CHECK-NEXT: ret i1 [[UEQ_NEG_INFINITY]]
;
- %ueq.neg.infinity = fcmp ueq half %x, 0xHFC00
+ %ueq.neg.infinity = fcmp ueq half %x, f0xFC00
%uno = fcmp uno half %x, 0.0
%class = or i1 %ueq.neg.infinity, %uno
ret i1 %class
@@ -1611,10 +1611,10 @@ define i1 @ueq_neginfinity_or_uno(half %x) #0 {
; -> nan|ninf
define i1 @ueq_neginfinity_and_ord(half %x) #0 {
; CHECK-LABEL: @ueq_neginfinity_and_ord(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %ueq.neg.infinity = fcmp ueq half %x, 0xHFC00
+ %ueq.neg.infinity = fcmp ueq half %x, f0xFC00
%ord = fcmp uno half %x, 0.0
%class = and i1 %ueq.neg.infinity, %ord
ret i1 %class
@@ -1623,11 +1623,11 @@ define i1 @ueq_neginfinity_and_ord(half %x) #0 {
; -> uno
define i1 @ueq_neginfinity_and_uge_smallest_normal(half %x) #0 {
; CHECK-LABEL: @ueq_neginfinity_and_uge_smallest_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %ueq.neg.infinity = fcmp ueq half %x, 0xHFC00
- %cmp.smallest.normal = fcmp uge half %x, 0xH0400
+ %ueq.neg.infinity = fcmp ueq half %x, f0xFC00
+ %cmp.smallest.normal = fcmp uge half %x, f0x0400
%class = and i1 %ueq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1635,11 +1635,11 @@ define i1 @ueq_neginfinity_and_uge_smallest_normal(half %x) #0 {
; -> ord
define i1 @fabs_oeq_neginfinity_or_ord(half %x) #0 {
; CHECK-LABEL: @fabs_oeq_neginfinity_or_ord(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[ORD]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.oeq.neg.infinity = fcmp oeq half %fabs, 0xHFC00
+ %fabs.oeq.neg.infinity = fcmp oeq half %fabs, f0xFC00
%ord = fcmp ord half %x, 0.0
%class = or i1 %fabs.oeq.neg.infinity, %ord
ret i1 %class
@@ -1651,7 +1651,7 @@ define i1 @fabs_une_neginfinity_or_ord(half %x) #0 {
; CHECK-NEXT: ret i1 true
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.une.neg.infinity = fcmp une half %fabs, 0xHFC00
+ %fabs.une.neg.infinity = fcmp une half %fabs, f0xFC00
%ord = fcmp une half %x, 0.0
%class = or i1 %fabs.une.neg.infinity, %ord
ret i1 %class
@@ -1660,11 +1660,11 @@ define i1 @fabs_une_neginfinity_or_ord(half %x) #0 {
; -> une
define i1 @fabs_une_neginfinity_and_ord(half %x) #0 {
; CHECK-LABEL: @fabs_une_neginfinity_and_ord(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[ORD]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.une.neg.infinity = fcmp une half %fabs, 0xHFC00
+ %fabs.une.neg.infinity = fcmp une half %fabs, f0xFC00
%ord = fcmp une half %x, 0.0
%class = and i1 %fabs.une.neg.infinity, %ord
ret i1 %class
@@ -1676,8 +1676,8 @@ define i1 @fabs_oeq_neginfinity_and_uge_smallest_normal(half %x) #0 {
; CHECK-NEXT: ret i1 false
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.oeq.neg.infinity = fcmp oeq half %fabs, 0xHFC00
- %cmp.smallest.normal = fcmp oeq half %x, 0xH0400
+ %fabs.oeq.neg.infinity = fcmp oeq half %fabs, f0xFC00
+ %cmp.smallest.normal = fcmp oeq half %x, f0x0400
%class = and i1 %fabs.oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1685,12 +1685,12 @@ define i1 @fabs_oeq_neginfinity_and_uge_smallest_normal(half %x) #0 {
; -> false
define i1 @fabs_oeq_neginfinity_or_uge_smallest_normal(half %x) #0 {
; CHECK-LABEL: @fabs_oeq_neginfinity_or_uge_smallest_normal(
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp oeq half [[X:%.*]], 0xH0400
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp oeq half [[X:%.*]], f0x0400
; CHECK-NEXT: ret i1 [[CMP_SMALLEST_NORMAL]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.oeq.neg.infinity = fcmp oeq half %fabs, 0xHFC00
- %cmp.smallest.normal = fcmp oeq half %x, 0xH0400
+ %fabs.oeq.neg.infinity = fcmp oeq half %fabs, f0xFC00
+ %cmp.smallest.normal = fcmp oeq half %x, f0x0400
%class = or i1 %fabs.oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1701,7 +1701,7 @@ define i1 @fabs_oeq_neginfinity_and_ord(half %x) #0 {
; CHECK-NEXT: ret i1 false
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.oeq.neg.infinity = fcmp oeq half %fabs, 0xHFC00
+ %fabs.oeq.neg.infinity = fcmp oeq half %fabs, f0xFC00
%ord = fcmp ord half %x, 0.0
%class = and i1 %fabs.oeq.neg.infinity, %ord
ret i1 %class
@@ -1713,8 +1713,8 @@ define i1 @fabs_ueq_neginfinity_and_olt_smallest_normal(half %x) #0 { ; WRONG
; CHECK-NEXT: ret i1 false
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.ueq.neg.infinity = fcmp ueq half %fabs, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %fabs.ueq.neg.infinity = fcmp ueq half %fabs, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = and i1 %fabs.ueq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1726,8 +1726,8 @@ define i1 @fabs_one_neginfinity_and_uge_smallest_normal(half %x) #0 {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.one.neg.infinity = fcmp one half %fabs, 0xHFC00
- %cmp.smallest.normal = fcmp uge half %x, 0xH0400
+ %fabs.one.neg.infinity = fcmp one half %fabs, f0xFC00
+ %cmp.smallest.normal = fcmp uge half %x, f0x0400
%class = and i1 %fabs.one.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1735,12 +1735,12 @@ define i1 @fabs_one_neginfinity_and_uge_smallest_normal(half %x) #0 {
; -> ord
define i1 @fabs_one_neginfinity_or_olt_smallest_normal(half %x) #0 {
; CHECK-LABEL: @fabs_one_neginfinity_or_olt_smallest_normal(
-; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CLASS:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.one.neg.infinity = fcmp one half %fabs, 0xHFC00
- %cmp.smallest.normal = fcmp olt half %x, 0xH0400
+ %fabs.one.neg.infinity = fcmp one half %fabs, f0xFC00
+ %cmp.smallest.normal = fcmp olt half %x, f0x0400
%class = or i1 %fabs.one.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1752,8 +1752,8 @@ define i1 @fabs_ueq_neginfinity_or_fabs_uge_smallest_normal(half %x) #0 {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %fabs.oeq.neg.infinity = fcmp ueq half %fabs, 0xHFC00
- %cmp.smallest.normal = fcmp uge half %fabs, 0xH0400
+ %fabs.oeq.neg.infinity = fcmp ueq half %fabs, f0xFC00
+ %cmp.smallest.normal = fcmp uge half %fabs, f0x0400
%class = or i1 %fabs.oeq.neg.infinity, %cmp.smallest.normal
ret i1 %class
}
@@ -1766,14 +1766,14 @@ define i1 @fabs_ueq_neginfinity_or_fabs_uge_smallest_normal(half %x) #0 {
define i1 @not_isfinite_or_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @not_isfinite_or_zero_f16_daz(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -1781,13 +1781,13 @@ define i1 @not_isfinite_or_zero_f16_daz(half %x) #1 {
define <2 x i1> @not_isfinite_or_zero_v2f16_daz(<2 x half> %x) #1 {
; CHECK-LABEL: @not_isfinite_or_zero_v2f16_daz(
; CHECK-NEXT: [[FABS:%.*]] = call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq <2 x half> [[FABS]], splat (half 0xH7C00)
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq <2 x half> [[FABS]], splat (half f0x7C00)
; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq <2 x half> [[X]], zeroinitializer
; CHECK-NEXT: [[CLASS:%.*]] = or <2 x i1> [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret <2 x i1> [[CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp ueq <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp ueq <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp oeq <2 x half> %x, zeroinitializer
%class = or <2 x i1> %cmpzero, %cmpinf
ret <2 x i1> %class
@@ -1797,14 +1797,14 @@ define <2 x i1> @not_isfinite_or_zero_v2f16_daz(<2 x half> %x) #1 {
define i1 @not_isfinite_or_zero_f16_dynamic(half %x) #2 {
; CHECK-LABEL: @not_isfinite_or_zero_f16_dynamic(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], 0xH7C00
-; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[FABS]], f0x7C00
+; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp ueq half %fabs, 0xH7C00
- %cmpzero = fcmp oeq half %x, 0xH0000
+ %cmpinf = fcmp ueq half %fabs, f0x7C00
+ %cmpzero = fcmp oeq half %x, f0x0000
%class = or i1 %cmpzero, %cmpinf
ret i1 %class
}
@@ -1812,13 +1812,13 @@ define i1 @not_isfinite_or_zero_f16_dynamic(half %x) #2 {
define <2 x i1> @not_isfinite_or_zero_v2f16_dynamic(<2 x half> %x) #2 {
; CHECK-LABEL: @not_isfinite_or_zero_v2f16_dynamic(
; CHECK-NEXT: [[FABS:%.*]] = call <2 x half> @llvm.fabs.v2f16(<2 x half> [[X:%.*]])
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq <2 x half> [[FABS]], splat (half 0xH7C00)
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq <2 x half> [[FABS]], splat (half f0x7C00)
; CHECK-NEXT: [[CMPZERO:%.*]] = fcmp oeq <2 x half> [[X]], zeroinitializer
; CHECK-NEXT: [[CLASS:%.*]] = or <2 x i1> [[CMPZERO]], [[CMPINF]]
; CHECK-NEXT: ret <2 x i1> [[CLASS]]
;
%fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %x)
- %cmpinf = fcmp ueq <2 x half> %fabs, <half 0xH7C00, half 0xH7C00>
+ %cmpinf = fcmp ueq <2 x half> %fabs, <half f0x7C00, half f0x7C00>
%cmpzero = fcmp oeq <2 x half> %x, zeroinitializer
%class = or <2 x i1> %cmpzero, %cmpinf
ret <2 x i1> %class
@@ -1826,12 +1826,12 @@ define <2 x i1> @not_isfinite_or_zero_v2f16_dynamic(<2 x half> %x) #2 {
define i1 @not_zero_and_subnormal_daz(half %x) #1 {
; CHECK-LABEL: @not_zero_and_subnormal_daz(
-; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[OR:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
%cmp.zero = fcmp one half %fabs, 0.0
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%or = or i1 %cmp.smallest.normal, %cmp.zero
ret i1 %or
}
@@ -1839,39 +1839,39 @@ define i1 @not_zero_and_subnormal_daz(half %x) #1 {
define i1 @not_zero_and_subnormal_dynamic(half %x) #2 {
; CHECK-LABEL: @not_zero_and_subnormal_dynamic(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMP_ZERO:%.*]] = fcmp one half [[X]], 0xH0000
-; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp olt half [[FABS]], 0xH0400
+; CHECK-NEXT: [[CMP_ZERO:%.*]] = fcmp one half [[X]], f0x0000
+; CHECK-NEXT: [[CMP_SMALLEST_NORMAL:%.*]] = fcmp olt half [[FABS]], f0x0400
; CHECK-NEXT: [[OR:%.*]] = or i1 [[CMP_SMALLEST_NORMAL]], [[CMP_ZERO]]
; CHECK-NEXT: ret i1 [[OR]]
;
%fabs = call half @llvm.fabs.f16(half %x)
%cmp.zero = fcmp one half %fabs, 0.0
- %cmp.smallest.normal = fcmp olt half %fabs, 0xH0400
+ %cmp.smallest.normal = fcmp olt half %fabs, f0x0400
%or = or i1 %cmp.smallest.normal, %cmp.zero
ret i1 %or
}
-; TODO: This could fold to just fcmp olt half %fabs, 0xH0400
+; TODO: This could fold to just fcmp olt half %fabs, f0x0400
define i1 @subnormal_or_zero_ieee(half %x) #0 {
; CHECK-LABEL: @subnormal_or_zero_ieee(
; CHECK-NEXT: [[AND:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 240)
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
- %is.zero = fcmp oeq half %x, 0xH0000
+ %is.subnormal = fcmp olt half %fabs, f0x0400
+ %is.zero = fcmp oeq half %x, f0x0000
%and = or i1 %is.subnormal, %is.zero
ret i1 %and
}
define i1 @subnormal_or_zero_daz(half %x) #1 {
; CHECK-LABEL: @subnormal_or_zero_daz(
-; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[AND:%.*]] = fcmp oeq half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
- %is.zero = fcmp oeq half %x, 0xH0000
+ %is.subnormal = fcmp olt half %fabs, f0x0400
+ %is.zero = fcmp oeq half %x, f0x0000
%and = or i1 %is.subnormal, %is.zero
ret i1 %and
}
@@ -1879,14 +1879,14 @@ define i1 @subnormal_or_zero_daz(half %x) #1 {
define i1 @subnormal_or_zero_dynamic(half %x) #2 {
; CHECK-LABEL: @subnormal_or_zero_dynamic(
; CHECK-NEXT: [[FABS:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[IS_SUBNORMAL:%.*]] = fcmp olt half [[FABS]], 0xH0400
-; CHECK-NEXT: [[IS_ZERO:%.*]] = fcmp oeq half [[X]], 0xH0000
+; CHECK-NEXT: [[IS_SUBNORMAL:%.*]] = fcmp olt half [[FABS]], f0x0400
+; CHECK-NEXT: [[IS_ZERO:%.*]] = fcmp oeq half [[X]], f0x0000
; CHECK-NEXT: [[AND:%.*]] = or i1 [[IS_SUBNORMAL]], [[IS_ZERO]]
; CHECK-NEXT: ret i1 [[AND]]
;
%fabs = call half @llvm.fabs.f16(half %x)
- %is.subnormal = fcmp olt half %fabs, 0xH0400
- %is.zero = fcmp oeq half %x, 0xH0000
+ %is.subnormal = fcmp olt half %fabs, f0x0400
+ %is.zero = fcmp oeq half %x, f0x0000
%and = or i1 %is.subnormal, %is.zero
ret i1 %and
}
@@ -1897,8 +1897,8 @@ define i1 @issubnormal_or_inf_nnan_logical_select(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call nnan half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp nnan oeq half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp nnan olt half %fabs, 0xH0400
+ %cmpinf = fcmp nnan oeq half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp nnan olt half %fabs, f0x0400
%class = select i1 %cmpinf, i1 true, i1 %cmp.smallest.normal
ret i1 %class
}
@@ -1909,8 +1909,8 @@ define i1 @issubnormal_and_ninf_nnan_logical_select(half %x) {
; CHECK-NEXT: ret i1 [[CLASS]]
;
%fabs = call nnan half @llvm.fabs.f16(half %x)
- %cmpinf = fcmp nnan one half %fabs, 0xH7C00
- %cmp.smallest.normal = fcmp nnan olt half %fabs, 0xH0400
+ %cmpinf = fcmp nnan one half %fabs, f0x7C00
+ %cmp.smallest.normal = fcmp nnan olt half %fabs, f0x0400
%class = select i1 %cmpinf, i1 %cmp.smallest.normal, i1 false
ret i1 %class
}
@@ -1920,8 +1920,8 @@ define i1 @fcmp_ueq_neginf_or_oge_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 999)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xHFC00
- %cmp.oge.zero = fcmp oge half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0xFC00
+ %cmp.oge.zero = fcmp oge half %x, f0x0000
%class = or i1 %cmp.oge.zero, %cmpinf
ret i1 %class
}
@@ -1931,34 +1931,34 @@ define i1 @fcmp_oeq_neginf_or_oge_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 996)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xHFC00
- %cmp.oge.zero = fcmp oge half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0xFC00
+ %cmp.oge.zero = fcmp oge half %x, f0x0000
%class = or i1 %cmp.oge.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_ueq_neginf_or_oge_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_ueq_neginf_or_oge_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_OGE_ZERO:%.*]] = fcmp oge half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_OGE_ZERO:%.*]] = fcmp oge half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OGE_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xHFC00
- %cmp.oge.zero = fcmp oge half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0xFC00
+ %cmp.oge.zero = fcmp oge half %x, f0x0000
%class = or i1 %cmp.oge.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_neginf_or_oge_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_neginf_or_oge_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_OGE_ZERO:%.*]] = fcmp oge half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_OGE_ZERO:%.*]] = fcmp oge half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OGE_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xHFC00
- %cmp.oge.zero = fcmp oge half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0xFC00
+ %cmp.oge.zero = fcmp oge half %x, f0x0000
%class = or i1 %cmp.oge.zero, %cmpinf
ret i1 %class
}
@@ -1968,8 +1968,8 @@ define i1 @fcmp_oeq_neginf_or_ogt_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 900)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xHFC00
- %cmp.ogt.zero = fcmp ogt half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0xFC00
+ %cmp.ogt.zero = fcmp ogt half %x, f0x0000
%class = or i1 %cmp.ogt.zero, %cmpinf
ret i1 %class
}
@@ -1979,34 +1979,34 @@ define i1 @fcmp_ueq_neginf_or_ogt_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 903)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xHFC00
- %cmp.ogt.zero = fcmp ogt half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0xFC00
+ %cmp.ogt.zero = fcmp ogt half %x, f0x0000
%class = or i1 %cmp.ogt.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_ueq_neginf_or_ogt_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_ueq_neginf_or_ogt_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_OGT_ZERO:%.*]] = fcmp ogt half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_OGT_ZERO:%.*]] = fcmp ogt half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OGT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xHFC00
- %cmp.ogt.zero = fcmp ogt half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0xFC00
+ %cmp.ogt.zero = fcmp ogt half %x, f0x0000
%class = or i1 %cmp.ogt.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_neginf_or_ogt_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_neginf_or_ogt_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_OGT_ZERO:%.*]] = fcmp ogt half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_OGT_ZERO:%.*]] = fcmp ogt half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OGT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xHFC00
- %cmp.ogt.zero = fcmp ogt half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0xFC00
+ %cmp.ogt.zero = fcmp ogt half %x, f0x0000
%class = or i1 %cmp.ogt.zero, %cmpinf
ret i1 %class
}
@@ -2016,34 +2016,34 @@ define i1 @fcmp_oeq_neginf_or_ugt_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 903)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xHFC00
- %cmp.ugt.zero = fcmp ugt half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0xFC00
+ %cmp.ugt.zero = fcmp ugt half %x, f0x0000
%class = or i1 %cmp.ugt.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_ueq_neginf_or_ugt_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_ueq_neginf_or_ugt_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_UGT_ZERO:%.*]] = fcmp ugt half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_UGT_ZERO:%.*]] = fcmp ugt half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_UGT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xHFC00
- %cmp.ugt.zero = fcmp ugt half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0xFC00
+ %cmp.ugt.zero = fcmp ugt half %x, f0x0000
%class = or i1 %cmp.ugt.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_neginf_or_ugt_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_neginf_or_ugt_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xHFC00
-; CHECK-NEXT: [[CMP_UGT_ZERO:%.*]] = fcmp ugt half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0xFC00
+; CHECK-NEXT: [[CMP_UGT_ZERO:%.*]] = fcmp ugt half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_UGT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xHFC00
- %cmp.ugt.zero = fcmp ugt half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0xFC00
+ %cmp.ugt.zero = fcmp ugt half %x, f0x0000
%class = or i1 %cmp.ugt.zero, %cmpinf
ret i1 %class
}
@@ -2053,8 +2053,8 @@ define i1 @fcmp_ueq_posinf_or_ole_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 639)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.ole.zero = fcmp ole half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.ole.zero = fcmp ole half %x, f0x0000
%class = or i1 %cmp.ole.zero, %cmpinf
ret i1 %class
}
@@ -2064,34 +2064,34 @@ define i1 @fcmp_oeq_posinf_or_ole_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 636)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.ole.zero = fcmp ole half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.ole.zero = fcmp ole half %x, f0x0000
%class = or i1 %cmp.ole.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_ueq_posinf_or_ole_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_ueq_posinf_or_ole_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_OLE_ZERO:%.*]] = fcmp ole half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_OLE_ZERO:%.*]] = fcmp ole half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OLE_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.ole.zero = fcmp ole half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.ole.zero = fcmp ole half %x, f0x0000
%class = or i1 %cmp.ole.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_posinf_or_ole_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_posinf_or_ole_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_OLE_ZERO:%.*]] = fcmp ole half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_OLE_ZERO:%.*]] = fcmp ole half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OLE_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.ole.zero = fcmp ole half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.ole.zero = fcmp ole half %x, f0x0000
%class = or i1 %cmp.ole.zero, %cmpinf
ret i1 %class
}
@@ -2101,21 +2101,21 @@ define i1 @fcmp_oeq_posinf_or_olt_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 540)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.olt.zero = fcmp olt half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.olt.zero = fcmp olt half %x, f0x0000
%class = or i1 %cmp.olt.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_posinf_or_olt_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_posinf_or_olt_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_OLT_ZERO:%.*]] = fcmp olt half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_OLT_ZERO:%.*]] = fcmp olt half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_OLT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.olt.zero = fcmp olt half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.olt.zero = fcmp olt half %x, f0x0000
%class = or i1 %cmp.olt.zero, %cmpinf
ret i1 %class
}
@@ -2125,8 +2125,8 @@ define i1 @fcmp_ueq_posinf_or_ult_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 543)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.ult.zero = fcmp ult half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.ult.zero = fcmp ult half %x, f0x0000
%class = or i1 %cmp.ult.zero, %cmpinf
ret i1 %class
}
@@ -2136,34 +2136,34 @@ define i1 @fcmp_oeq_posinf_or_ult_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 543)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.ult.zero = fcmp ult half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.ult.zero = fcmp ult half %x, f0x0000
%class = or i1 %cmp.ult.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_ueq_posinf_or_ult_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_ueq_posinf_or_ult_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_ULT_ZERO:%.*]] = fcmp ult half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_ULT_ZERO:%.*]] = fcmp ult half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_ULT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.ult.zero = fcmp ult half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.ult.zero = fcmp ult half %x, f0x0000
%class = or i1 %cmp.ult.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_posinf_or_ult_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_posinf_or_ult_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_ULT_ZERO:%.*]] = fcmp ult half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_ULT_ZERO:%.*]] = fcmp ult half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_ULT_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.ult.zero = fcmp ult half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.ult.zero = fcmp ult half %x, f0x0000
%class = or i1 %cmp.ult.zero, %cmpinf
ret i1 %class
}
@@ -2173,34 +2173,34 @@ define i1 @fcmp_ueq_posinf_or_ule_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 639)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.ule.zero = fcmp ule half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.ule.zero = fcmp ule half %x, f0x0000
%class = or i1 %cmp.ule.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_ueq_posinf_or_ule_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_ueq_posinf_or_ule_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_ULE_ZERO:%.*]] = fcmp ule half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp ueq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_ULE_ZERO:%.*]] = fcmp ule half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_ULE_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.ule.zero = fcmp ule half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.ule.zero = fcmp ule half %x, f0x0000
%class = or i1 %cmp.ule.zero, %cmpinf
ret i1 %class
}
define i1 @fcmp_oeq_posinf_or_ule_zero_f16_daz(half %x) #1 {
; CHECK-LABEL: @fcmp_oeq_posinf_or_ule_zero_f16_daz(
-; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], 0xH7C00
-; CHECK-NEXT: [[CMP_ULE_ZERO:%.*]] = fcmp ule half [[X]], 0xH0000
+; CHECK-NEXT: [[CMPINF:%.*]] = fcmp oeq half [[X:%.*]], f0x7C00
+; CHECK-NEXT: [[CMP_ULE_ZERO:%.*]] = fcmp ule half [[X]], f0x0000
; CHECK-NEXT: [[CLASS:%.*]] = or i1 [[CMP_ULE_ZERO]], [[CMPINF]]
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp oeq half %x, 0xH7C00
- %cmp.ule.zero = fcmp ule half %x, 0xH0000
+ %cmpinf = fcmp oeq half %x, f0x7C00
+ %cmp.ule.zero = fcmp ule half %x, f0x0000
%class = or i1 %cmp.ule.zero, %cmpinf
ret i1 %class
}
@@ -2210,8 +2210,8 @@ define i1 @fcmp_ueq_posinf_or_olt_zero_f16(half %x) {
; CHECK-NEXT: [[CLASS:%.*]] = call i1 @llvm.is.fpclass.f16(half [[X:%.*]], i32 543)
; CHECK-NEXT: ret i1 [[CLASS]]
;
- %cmpinf = fcmp ueq half %x, 0xH7C00
- %cmp.olt.zero = fcmp olt half %x, 0xH0000
+ %cmpinf = fcmp ueq half %x, f0x7C00
+ %cmp.olt.zero = fcmp olt half %x, f0x0000
%class = or i1 %cmp.olt.zero, %cmpinf
ret i1 %class
}
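
For the half-precision tests above, the rewrite is a straight prefix change: the bit pattern after f0x is the same one the legacy 0xH form carried. As a minimal standalone sketch (hypothetical function name; bit patterns taken from the tests above), the recurring constants decode as:

  define i1 @half_constants_sketch(half %x) {
    %is.neginf = fcmp oeq half %x, f0xFC00   ; f0xFC00 (was 0xHFC00) is -infinity
    %is.posinf = fcmp oeq half %x, f0x7C00   ; f0x7C00 (was 0xH7C00) is +infinity
    %ge.zero   = fcmp oge half %x, f0x0000   ; f0x0000 (was 0xH0000) is +0.0
    %inf = or i1 %is.neginf, %is.posinf
    %r   = or i1 %inf, %ge.zero
    ret i1 %r
  }
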
diff --git a/llvm/test/Transforms/InstCombine/exp2-1.ll b/llvm/test/Transforms/InstCombine/exp2-1.ll
index 0502540b7f7e92..a64566ba62ce18 100644
--- a/llvm/test/Transforms/InstCombine/exp2-1.ll
+++ b/llvm/test/Transforms/InstCombine/exp2-1.ll
@@ -383,7 +383,7 @@ define float @test_readonly_exp2f_f32_of_sitofp(i32 %x) {
define fp128 @test_readonly_exp2l_fp128_of_sitofp(i32 %x) {
; LDEXP32-LABEL: @test_readonly_exp2l_fp128_of_sitofp(
-; LDEXP32-NEXT: [[LDEXPL:%.*]] = call fp128 @ldexpl(fp128 0xL00000000000000003FFF000000000000, i32 [[X:%.*]])
+; LDEXP32-NEXT: [[LDEXPL:%.*]] = call fp128 @ldexpl(fp128 f0x3FFF0000000000000000000000000000, i32 [[X:%.*]])
; LDEXP32-NEXT: ret fp128 [[LDEXPL]]
;
; LDEXP16-LABEL: @test_readonly_exp2l_fp128_of_sitofp(
diff --git a/llvm/test/Transforms/InstCombine/exp2-to-ldexp.ll b/llvm/test/Transforms/InstCombine/exp2-to-ldexp.ll
index 8a52f79f307ca0..61e59c5e41ec1b 100644
--- a/llvm/test/Transforms/InstCombine/exp2-to-ldexp.ll
+++ b/llvm/test/Transforms/InstCombine/exp2-to-ldexp.ll
@@ -54,7 +54,7 @@ define half @exp2_f16_sitofp_i8(i8 %x) {
; CHECK-LABEL: define half @exp2_f16_sitofp_i8(
; CHECK-SAME: i8 [[X:%.*]]) {
; CHECK-NEXT: [[TMP1:%.*]] = sext i8 [[X]] to i32
-; CHECK-NEXT: [[EXP2:%.*]] = call half @llvm.ldexp.f16.i32(half 0xH3C00, i32 [[TMP1]])
+; CHECK-NEXT: [[EXP2:%.*]] = call half @llvm.ldexp.f16.i32(half f0x3C00, i32 [[TMP1]])
; CHECK-NEXT: ret half [[EXP2]]
;
%itofp = sitofp i8 %x to half
@@ -78,7 +78,7 @@ define fp128 @exp2_fp128_sitofp_i8(i8 %x) {
; CHECK-LABEL: define fp128 @exp2_fp128_sitofp_i8(
; CHECK-SAME: i8 [[X:%.*]]) {
; CHECK-NEXT: [[TMP1:%.*]] = sext i8 [[X]] to i32
-; CHECK-NEXT: [[EXP2:%.*]] = call fp128 @llvm.ldexp.f128.i32(fp128 0xL00000000000000003FFF000000000000, i32 [[TMP1]])
+; CHECK-NEXT: [[EXP2:%.*]] = call fp128 @llvm.ldexp.f128.i32(fp128 f0x3FFF0000000000000000000000000000, i32 [[TMP1]])
; CHECK-NEXT: ret fp128 [[EXP2]]
;
%itofp = sitofp i8 %x to fp128
diff --git a/llvm/test/Transforms/InstCombine/fabs.ll b/llvm/test/Transforms/InstCombine/fabs.ll
index cccf0f4457b6ab..5b0d78cbdcf668 100644
--- a/llvm/test/Transforms/InstCombine/fabs.ll
+++ b/llvm/test/Transforms/InstCombine/fabs.ll
@@ -440,8 +440,8 @@ define half @select_fcmp_nnan_ugt_negzero(half %x) {
define half @select_fcmp_nnan_oge_negzero(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_oge_negzero(
-; CHECK-NEXT: [[GTZERO:%.*]] = fcmp oge half [[X:%.*]], 0xH0000
-; CHECK-NEXT: [[NEGX:%.*]] = fsub nnan half 0xH0000, [[X]]
+; CHECK-NEXT: [[GTZERO:%.*]] = fcmp oge half [[X:%.*]], f0x0000
+; CHECK-NEXT: [[NEGX:%.*]] = fsub nnan half f0x0000, [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select i1 [[GTZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
;
@@ -843,7 +843,7 @@ define <2 x float> @select_fcmp_nnan_nsz_ugt_zero_unary_fneg(<2 x float> %x) {
define half @select_fcmp_nnan_nsz_ogt_negzero(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_nsz_ogt_negzero(
-; CHECK-NEXT: [[GTZERO:%.*]] = fcmp ogt half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[GTZERO:%.*]] = fcmp ogt half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NEGX:%.*]] = fneg fast half [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select nnan ninf i1 [[GTZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
@@ -858,7 +858,7 @@ define half @select_fcmp_nnan_nsz_ogt_negzero(half %x) {
define half @select_fcmp_nnan_nsz_ugt_negzero(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_nsz_ugt_negzero(
-; CHECK-NEXT: [[GTZERO:%.*]] = fcmp ugt half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[GTZERO:%.*]] = fcmp ugt half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NEGX:%.*]] = fneg fast half [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select nnan ninf i1 [[GTZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
@@ -931,7 +931,7 @@ define <2 x double> @select_fcmp_nnan_nsz_uge_zero_unary_fneg(<2 x double> %x) {
define half @select_fcmp_nnan_nsz_oge_negzero(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_nsz_oge_negzero(
-; CHECK-NEXT: [[GEZERO:%.*]] = fcmp oge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[GEZERO:%.*]] = fcmp oge half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NEGX:%.*]] = fneg nnan nsz half [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select nnan i1 [[GEZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
@@ -946,7 +946,7 @@ define half @select_fcmp_nnan_nsz_oge_negzero(half %x) {
define half @select_fcmp_nnan_nsz_uge_negzero(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_nsz_uge_negzero(
-; CHECK-NEXT: [[GEZERO:%.*]] = fcmp uge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[GEZERO:%.*]] = fcmp uge half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NEGX:%.*]] = fneg nnan nsz half [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select nnan i1 [[GEZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
@@ -959,7 +959,7 @@ define half @select_fcmp_nnan_nsz_uge_negzero(half %x) {
define half @select_fcmp_nnan_nsz_oge_negzero_unary_fneg(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_nsz_oge_negzero_unary_fneg(
-; CHECK-NEXT: [[GEZERO:%.*]] = fcmp oge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[GEZERO:%.*]] = fcmp oge half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NEGX:%.*]] = fneg nnan nsz half [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select nnan i1 [[GEZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
@@ -974,7 +974,7 @@ define half @select_fcmp_nnan_nsz_oge_negzero_unary_fneg(half %x) {
define half @select_fcmp_nnan_nsz_uge_negzero_unary_fneg(half %x) {
; CHECK-LABEL: @select_fcmp_nnan_nsz_uge_negzero_unary_fneg(
-; CHECK-NEXT: [[GEZERO:%.*]] = fcmp uge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[GEZERO:%.*]] = fcmp uge half [[X:%.*]], f0x0000
; CHECK-NEXT: [[NEGX:%.*]] = fneg nnan nsz half [[X]]
; CHECK-NEXT: [[FABS:%.*]] = select nnan i1 [[GEZERO]], half [[X]], half [[NEGX]]
; CHECK-NEXT: ret half [[FABS]]
diff --git a/llvm/test/Transforms/InstCombine/fcmp-denormals-are-zero.ll b/llvm/test/Transforms/InstCombine/fcmp-denormals-are-zero.ll
index eea1dda6230a9d..60093155fb76ec 100644
--- a/llvm/test/Transforms/InstCombine/fcmp-denormals-are-zero.ll
+++ b/llvm/test/Transforms/InstCombine/fcmp-denormals-are-zero.ll
@@ -11,7 +11,7 @@ define void @denormal_input_preserve_sign_fcmp_olt_smallest_normalized(float %f3
; CHECK-NEXT: store volatile i1 [[CMPF32]], ptr @var, align 1
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp oeq double [[F64:%.*]], 0.000000e+00
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp oeq half [[F16:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp oeq half [[F16:%.*]], f0x0000
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: [[CMPF32_FLAGS:%.*]] = fcmp oeq float [[F32]], 0.000000e+00
; CHECK-NEXT: store volatile i1 [[CMPF32_FLAGS]], ptr @var, align 1
@@ -26,7 +26,7 @@ define void @denormal_input_preserve_sign_fcmp_olt_smallest_normalized(float %f3
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp olt half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp olt half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
%f32.fabs.flags = call nsz nnan float @llvm.fabs.f32(float %f32)
@@ -44,7 +44,7 @@ define void @denormal_input_preserve_sign_fcmp_uge_smallest_normalized(float %f3
; CHECK-NEXT: store volatile i1 [[CMPF32]], ptr @var, align 1
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp une double [[F64:%.*]], 0.000000e+00
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp une half [[F16:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp une half [[F16:%.*]], f0x0000
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -57,7 +57,7 @@ define void @denormal_input_preserve_sign_fcmp_uge_smallest_normalized(float %f3
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp uge half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp uge half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -70,7 +70,7 @@ define void @denormal_input_preserve_sign_fcmp_oge_smallest_normalized(float %f3
; CHECK-NEXT: store volatile i1 [[CMPF32]], ptr @var, align 1
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp one double [[F64:%.*]], 0.000000e+00
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp one half [[F16:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp one half [[F16:%.*]], f0x0000
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -83,7 +83,7 @@ define void @denormal_input_preserve_sign_fcmp_oge_smallest_normalized(float %f3
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp oge half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp oge half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -96,7 +96,7 @@ define void @denormal_input_preserve_sign_fcmp_ult_smallest_normalized(float %f3
; CHECK-NEXT: store volatile i1 [[CMPF32]], ptr @var, align 1
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp ueq double [[F64:%.*]], 0.000000e+00
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp ueq half [[F16:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp ueq half [[F16:%.*]], f0x0000
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -109,7 +109,7 @@ define void @denormal_input_preserve_sign_fcmp_ult_smallest_normalized(float %f3
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp ult half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp ult half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -133,7 +133,7 @@ define void @denormal_input_preserve_sign_vector_fcmp_olt_smallest_normalized(<2
store volatile <2 x i1> %cmpf64, ptr @var
%f16.fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %f16)
- %cmpf16 = fcmp olt <2 x half> %f16.fabs, <half 0xH0400, half 0xH0400>
+ %cmpf16 = fcmp olt <2 x half> %f16.fabs, <half f0x0400, half f0x0400>
store volatile <2 x i1> %cmpf16, ptr @var
ret void
}
@@ -157,7 +157,7 @@ define void @denormal_input_preserve_sign_vector_fcmp_uge_smallest_normalized(<2
store volatile <2 x i1> %cmpf64, ptr @var
%f16.fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %f16)
- %cmpf16 = fcmp uge <2 x half> %f16.fabs, <half 0xH0400, half 0xH0400>
+ %cmpf16 = fcmp uge <2 x half> %f16.fabs, <half f0x0400, half f0x0400>
store volatile <2 x i1> %cmpf16, ptr @var
ret void
}
@@ -181,7 +181,7 @@ define void @denormal_input_preserve_sign_vector_fcmp_oge_smallest_normalized(<2
store volatile <2 x i1> %cmpf64, ptr @var
%f16.fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %f16)
- %cmpf16 = fcmp oge <2 x half> %f16.fabs, <half 0xH0400, half 0xH0400>
+ %cmpf16 = fcmp oge <2 x half> %f16.fabs, <half f0x0400, half f0x0400>
store volatile <2 x i1> %cmpf16, ptr @var
ret void
}
@@ -205,7 +205,7 @@ define void @denormal_input_preserve_sign_vector_fcmp_ult_smallest_normalized(<2
store volatile <2 x i1> %cmpf64, ptr @var
%f16.fabs = call <2 x half> @llvm.fabs.v2f16(<2 x half> %f16)
- %cmpf16 = fcmp ult <2 x half> %f16.fabs, <half 0xH0400, half 0xH0400>
+ %cmpf16 = fcmp ult <2 x half> %f16.fabs, <half f0x0400, half f0x0400>
store volatile <2 x i1> %cmpf16, ptr @var
ret void
}
@@ -218,7 +218,7 @@ define void @denormal_input_positive_zero_fcmp_olt_smallest_normalized(float %f3
; CHECK-NEXT: store volatile i1 [[CMPF32]], ptr @var, align 1
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp oeq double [[F64:%.*]], 0.000000e+00
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp oeq half [[F16:%.*]], 0xH0000
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp oeq half [[F16:%.*]], f0x0000
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -231,7 +231,7 @@ define void @denormal_input_positive_zero_fcmp_olt_smallest_normalized(float %f3
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp olt half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp olt half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -246,7 +246,7 @@ define void @denormal_input_ieee(float %f32, double %f64, half %f16) #2 {
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp olt double [[F64_FABS]], 0x10000000000000
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
; CHECK-NEXT: [[F16_FABS:%.*]] = call half @llvm.fabs.f16(half [[F16:%.*]])
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp olt half [[F16_FABS]], 0xH0400
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp olt half [[F16_FABS]], f0x0400
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -259,7 +259,7 @@ define void @denormal_input_ieee(float %f32, double %f64, half %f16) #2 {
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp olt half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp olt half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -273,7 +273,7 @@ define void @denormal_input_preserve_sign_f32_only(float %f32, double %f64, half
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp olt double [[F64_FABS]], 0x10000000000000
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
; CHECK-NEXT: [[F16_FABS:%.*]] = call half @llvm.fabs.f16(half [[F16:%.*]])
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp olt half [[F16_FABS]], 0xH0400
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp olt half [[F16_FABS]], f0x0400
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -286,7 +286,7 @@ define void @denormal_input_preserve_sign_f32_only(float %f32, double %f64, half
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp olt half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp olt half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -300,7 +300,7 @@ define void @wrong_fcmp_type_ole(float %f32, double %f64, half %f16) #0 {
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp ole double [[F64_FABS]], 0x10000000000000
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
; CHECK-NEXT: [[F16_FABS:%.*]] = call half @llvm.fabs.f16(half [[F16:%.*]])
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp ole half [[F16_FABS]], 0xH0400
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp ole half [[F16_FABS]], f0x0400
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -313,7 +313,7 @@ define void @wrong_fcmp_type_ole(float %f32, double %f64, half %f16) #0 {
store volatile i1 %cmpf64, ptr @var
%f16.fabs = call half @llvm.fabs.f16(half %f16)
- %cmpf16 = fcmp ole half %f16.fabs, 0xH0400
+ %cmpf16 = fcmp ole half %f16.fabs, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
@@ -324,7 +324,7 @@ define void @missing_fabs(float %f32, double %f64, half %f16) #0 {
; CHECK-NEXT: store volatile i1 [[CMPF32]], ptr @var, align 1
; CHECK-NEXT: [[CMPF64:%.*]] = fcmp olt double [[F64:%.*]], 0x10000000000000
; CHECK-NEXT: store volatile i1 [[CMPF64]], ptr @var, align 1
-; CHECK-NEXT: [[CMPF16:%.*]] = fcmp olt half [[F16:%.*]], 0xH0400
+; CHECK-NEXT: [[CMPF16:%.*]] = fcmp olt half [[F16:%.*]], f0x0400
; CHECK-NEXT: store volatile i1 [[CMPF16]], ptr @var, align 1
; CHECK-NEXT: ret void
;
@@ -334,7 +334,7 @@ define void @missing_fabs(float %f32, double %f64, half %f16) #0 {
%cmpf64 = fcmp olt double %f64, 0x10000000000000
store volatile i1 %cmpf64, ptr @var
- %cmpf16 = fcmp olt half %f16, 0xH0400
+ %cmpf16 = fcmp olt half %f16, f0x0400
store volatile i1 %cmpf16, ptr @var
ret void
}
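
The f0x0400 constant that recurs throughout the file above is the smallest positive normal half (2^-14: exponent field 1, mantissa 0), so these tests are comparing |x| against the normal/denormal boundary. A minimal sketch of that idiom in the new notation (hypothetical function name):

  define i1 @is_denormal_or_zero_sketch(half %f16) {
    %fabs = call half @llvm.fabs.f16(half %f16)
    %r = fcmp olt half %fabs, f0x0400   ; |x| < smallest normal half (was 0xH0400)
    ret i1 %r
  }
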
diff --git a/llvm/test/Transforms/InstCombine/fcmp-special.ll b/llvm/test/Transforms/InstCombine/fcmp-special.ll
index 64bc86f4266c78..03ec91e6e8096c 100644
--- a/llvm/test/Transforms/InstCombine/fcmp-special.ll
+++ b/llvm/test/Transforms/InstCombine/fcmp-special.ll
@@ -208,7 +208,7 @@ define i1 @negative_zero_oge(double %x) {
define i1 @negative_zero_uge(half %x) {
; CHECK-LABEL: @negative_zero_uge(
-; CHECK-NEXT: [[R:%.*]] = fcmp fast uge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[R:%.*]] = fcmp fast uge half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[R]]
;
%r = fcmp fast uge half %x, -0.0
diff --git a/llvm/test/Transforms/InstCombine/fcmp.ll b/llvm/test/Transforms/InstCombine/fcmp.ll
index 119cffd73c662c..f4a90947c00bda 100644
--- a/llvm/test/Transforms/InstCombine/fcmp.ll
+++ b/llvm/test/Transforms/InstCombine/fcmp.ll
@@ -32,7 +32,7 @@ define i1 @fpext_constant(float %a) {
define <2 x i1> @fpext_constant_vec_splat(<2 x half> %a) {
; CHECK-LABEL: @fpext_constant_vec_splat(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp nnan ole <2 x half> [[A:%.*]], splat (half 0xH5140)
+; CHECK-NEXT: [[CMP:%.*]] = fcmp nnan ole <2 x half> [[A:%.*]], splat (half f0x5140)
; CHECK-NEXT: ret <2 x i1> [[CMP]]
;
%ext = fpext <2 x half> %a to <2 x double>
@@ -273,7 +273,7 @@ define i1 @test7(float %x) {
; CHECK-NEXT: ret i1 [[CMP]]
;
%ext = fpext float %x to ppc_fp128
- %cmp = fcmp ogt ppc_fp128 %ext, 0xM00000000000000000000000000000000
+ %cmp = fcmp ogt ppc_fp128 %ext, f0x00000000000000000000000000000000
ret i1 %cmp
}
@@ -380,7 +380,7 @@ define <2 x i1> @fabs_ult_nnan(<2 x float> %a) {
define i1 @fabs_une(half %a) {
; CHECK-LABEL: @fabs_une(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ninf une half [[A:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ninf une half [[A:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%call = call half @llvm.fabs.f16(half %a)
@@ -760,7 +760,7 @@ define i1 @lossy_one(float %x, ptr %p) {
define i1 @lossy_ueq(half %x) {
; CHECK-LABEL: @lossy_ueq(
-; CHECK-NEXT: [[R:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[R:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[R]]
;
%e = fpext half %x to double
@@ -871,7 +871,7 @@ define i1 @lossy_ule(half %x) {
define i1 @lossy_ord(half %x) {
; CHECK-LABEL: @lossy_ord(
-; CHECK-NEXT: [[R:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[R:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[R]]
;
%e = fpext half %x to double
@@ -881,7 +881,7 @@ define i1 @lossy_ord(half %x) {
define i1 @lossy_uno(half %x) {
; CHECK-LABEL: @lossy_uno(
-; CHECK-NEXT: [[R:%.*]] = fcmp uno half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[R:%.*]] = fcmp uno half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[R]]
;
%e = fpext half %x to float
@@ -901,7 +901,7 @@ define i1 @fneg_oeq(float %a) {
define i1 @fneg_ogt(half %a) {
; CHECK-LABEL: @fneg_ogt(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp fast olt half [[A:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp fast olt half [[A:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%fneg = fneg half %a
@@ -974,7 +974,7 @@ define i1 @fneg_uno(float %a) {
define i1 @fneg_ueq(half %a) {
; CHECK-LABEL: @fneg_ueq(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp fast ueq half [[A:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp fast ueq half [[A:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%fneg = fneg half %a
@@ -1050,7 +1050,7 @@ define i1 @fneg_oeq_swap(float %p) {
define i1 @fneg_ogt_swap(half %p) {
; CHECK-LABEL: @fneg_ogt_swap(
; CHECK-NEXT: [[A:%.*]] = fadd half [[P:%.*]], [[P]]
-; CHECK-NEXT: [[CMP:%.*]] = fcmp fast ogt half [[A]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp fast ogt half [[A]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%a = fadd half %p, %p ; thwart complexity-based canonicalization
@@ -1137,7 +1137,7 @@ define i1 @fneg_uno_swap(float %p) {
define i1 @fneg_ueq_swap(half %p) {
; CHECK-LABEL: @fneg_ueq_swap(
; CHECK-NEXT: [[A:%.*]] = fadd half [[P:%.*]], [[P]]
-; CHECK-NEXT: [[CMP:%.*]] = fcmp fast ueq half [[A]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp fast ueq half [[A]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%a = fadd half %p, %p ; thwart complexity-based canonicalization
@@ -2129,7 +2129,7 @@ define i1 @fcmp_sqrt_zero_olt(half %x) {
define i1 @fcmp_sqrt_zero_ult(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ult(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2139,7 +2139,7 @@ define i1 @fcmp_sqrt_zero_ult(half %x) {
define i1 @fcmp_sqrt_zero_ult_fmf(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ult_fmf(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp nsz ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp nsz ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2149,7 +2149,7 @@ define i1 @fcmp_sqrt_zero_ult_fmf(half %x) {
define i1 @fcmp_sqrt_zero_ult_fmf_sqrt_ninf(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ult_fmf_sqrt_ninf(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ninf nsz ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ninf nsz ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call ninf half @llvm.sqrt.f16(half %x)
@@ -2159,7 +2159,7 @@ define i1 @fcmp_sqrt_zero_ult_fmf_sqrt_ninf(half %x) {
define i1 @fcmp_sqrt_zero_ult_nzero(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ult_nzero(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2189,7 +2189,7 @@ define <2 x i1> @fcmp_sqrt_zero_ult_vec_mixed_zero(<2 x half> %x) {
define i1 @fcmp_sqrt_zero_ole(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ole(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2199,7 +2199,7 @@ define i1 @fcmp_sqrt_zero_ole(half %x) {
define i1 @fcmp_sqrt_zero_ule(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ule(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ule half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ule half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2209,7 +2209,7 @@ define i1 @fcmp_sqrt_zero_ule(half %x) {
define i1 @fcmp_sqrt_zero_ogt(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ogt(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2219,7 +2219,7 @@ define i1 @fcmp_sqrt_zero_ogt(half %x) {
define i1 @fcmp_sqrt_zero_ugt(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ugt(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2229,7 +2229,7 @@ define i1 @fcmp_sqrt_zero_ugt(half %x) {
define i1 @fcmp_sqrt_zero_oge(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_oge(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oge half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2248,7 +2248,7 @@ define i1 @fcmp_sqrt_zero_uge(half %x) {
define i1 @fcmp_sqrt_zero_oeq(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_oeq(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oeq half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2258,7 +2258,7 @@ define i1 @fcmp_sqrt_zero_oeq(half %x) {
define i1 @fcmp_sqrt_zero_ueq(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ueq(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ule half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ule half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2268,7 +2268,7 @@ define i1 @fcmp_sqrt_zero_ueq(half %x) {
define i1 @fcmp_sqrt_zero_one(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_one(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2278,7 +2278,7 @@ define i1 @fcmp_sqrt_zero_one(half %x) {
define i1 @fcmp_sqrt_zero_une(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_une(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp une half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2288,7 +2288,7 @@ define i1 @fcmp_sqrt_zero_une(half %x) {
define i1 @fcmp_sqrt_zero_ord(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ord(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp oge half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp oge half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2298,7 +2298,7 @@ define i1 @fcmp_sqrt_zero_ord(half %x) {
define i1 @fcmp_sqrt_zero_uno(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_uno(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2309,7 +2309,7 @@ define i1 @fcmp_sqrt_zero_uno(half %x) {
; Make sure that ninf is cleared.
define i1 @fcmp_sqrt_zero_uno_fmf(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_uno_fmf(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
@@ -2319,7 +2319,7 @@ define i1 @fcmp_sqrt_zero_uno_fmf(half %x) {
define i1 @fcmp_sqrt_zero_uno_fmf_sqrt_ninf(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_uno_fmf_sqrt_ninf(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ninf ult half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ninf ult half [[X:%.*]], f0x0000
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call ninf half @llvm.sqrt.f16(half %x)
@@ -2343,7 +2343,7 @@ define i1 @fcmp_sqrt_zero_ult_var(half %x, half %y) {
define i1 @fcmp_sqrt_zero_ult_nonzero(half %x) {
; CHECK-LABEL: @fcmp_sqrt_zero_ult_nonzero(
; CHECK-NEXT: [[SQRT:%.*]] = call half @llvm.sqrt.f16(half [[X:%.*]])
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[SQRT]], 0xH3C00
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ult half [[SQRT]], f0x3C00
; CHECK-NEXT: ret i1 [[CMP]]
;
%sqrt = call half @llvm.sqrt.f16(half %x)
diff --git a/llvm/test/Transforms/InstCombine/fdiv-cos-sin.ll b/llvm/test/Transforms/InstCombine/fdiv-cos-sin.ll
index 6d945ede3b387c..5dfaf4695baa2c 100644
--- a/llvm/test/Transforms/InstCombine/fdiv-cos-sin.ll
+++ b/llvm/test/Transforms/InstCombine/fdiv-cos-sin.ll
@@ -106,7 +106,7 @@ define float @fdiv_cosf_sinf_reassoc(float %a) {
define fp128 @fdiv_cosfp128_sinfp128_reassoc(fp128 %a) {
; CHECK-LABEL: @fdiv_cosfp128_sinfp128_reassoc(
; CHECK-NEXT: [[TANL:%.*]] = call reassoc fp128 @tanl(fp128 [[A:%.*]]) #[[ATTR1]]
-; CHECK-NEXT: [[DIV:%.*]] = fdiv reassoc fp128 0xL00000000000000003FFF000000000000, [[TANL]]
+; CHECK-NEXT: [[DIV:%.*]] = fdiv reassoc fp128 f0x3FFF0000000000000000000000000000, [[TANL]]
; CHECK-NEXT: ret fp128 [[DIV]]
;
%1 = call reassoc fp128 @llvm.cos.fp128(fp128 %a)
diff --git a/llvm/test/Transforms/InstCombine/fma.ll b/llvm/test/Transforms/InstCombine/fma.ll
index ae0067d41426cf..b32bad377906d7 100644
--- a/llvm/test/Transforms/InstCombine/fma.ll
+++ b/llvm/test/Transforms/InstCombine/fma.ll
@@ -904,7 +904,7 @@ define <2 x half> @fma_negone_vec(<2 x half> %x, <2 x half> %y) {
define <2 x half> @fma_negone_vec_partial_undef(<2 x half> %x, <2 x half> %y) {
; CHECK-LABEL: @fma_negone_vec_partial_undef(
-; CHECK-NEXT: [[SUB:%.*]] = call <2 x half> @llvm.fma.v2f16(<2 x half> [[X:%.*]], <2 x half> <half undef, half 0xHBC00>, <2 x half> [[Y:%.*]])
+; CHECK-NEXT: [[SUB:%.*]] = call <2 x half> @llvm.fma.v2f16(<2 x half> [[X:%.*]], <2 x half> <half undef, half f0xBC00>, <2 x half> [[Y:%.*]])
; CHECK-NEXT: ret <2 x half> [[SUB]]
;
%sub = call <2 x half> @llvm.fma.v2f16(<2 x half> %x, <2 x half> <half undef, half -1.0>, <2 x half> %y)
@@ -915,7 +915,7 @@ define <2 x half> @fma_negone_vec_partial_undef(<2 x half> %x, <2 x half> %y) {
define half @fma_non_negone(half %x, half %y) {
; CHECK-LABEL: @fma_non_negone(
-; CHECK-NEXT: [[SUB:%.*]] = call half @llvm.fma.f16(half [[X:%.*]], half 0xHBE00, half [[Y:%.*]])
+; CHECK-NEXT: [[SUB:%.*]] = call half @llvm.fma.f16(half [[X:%.*]], half f0xBE00, half [[Y:%.*]])
; CHECK-NEXT: ret half [[SUB]]
;
%sub = call half @llvm.fma.f16(half %x, half -1.5, half %y)
diff --git a/llvm/test/Transforms/InstCombine/fmul.ll b/llvm/test/Transforms/InstCombine/fmul.ll
index cd4a8e36c6e239..771ab980f0739e 100644
--- a/llvm/test/Transforms/InstCombine/fmul.ll
+++ b/llvm/test/Transforms/InstCombine/fmul.ll
@@ -1278,7 +1278,7 @@ define <vscale x 2 x float> @mul_scalable_splat_zero(<vscale x 2 x float> %z) {
define half @mul_zero_nnan(half %x) {
; CHECK-LABEL: @mul_zero_nnan(
-; CHECK-NEXT: [[R:%.*]] = call nnan half @llvm.copysign.f16(half 0xH0000, half [[X:%.*]])
+; CHECK-NEXT: [[R:%.*]] = call nnan half @llvm.copysign.f16(half f0x0000, half [[X:%.*]])
; CHECK-NEXT: ret half [[R]]
;
%r = fmul nnan half %x, 0.0
@@ -1300,7 +1300,7 @@ define <2 x float> @mul_zero_nnan_vec_poison(<2 x float> %x) {
define half @mul_zero(half %x) {
; CHECK-LABEL: @mul_zero(
-; CHECK-NEXT: [[R:%.*]] = fmul ninf nsz half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[R:%.*]] = fmul ninf nsz half [[X:%.*]], f0x0000
; CHECK-NEXT: ret half [[R]]
;
%r = fmul ninf nsz half %x, 0.0
@@ -1310,7 +1310,7 @@ define half @mul_zero(half %x) {
define half @mul_negzero_nnan(half %x) {
; CHECK-LABEL: @mul_negzero_nnan(
; CHECK-NEXT: [[TMP1:%.*]] = fneg nnan half [[X:%.*]]
-; CHECK-NEXT: [[R:%.*]] = call nnan half @llvm.copysign.f16(half 0xH0000, half [[TMP1]])
+; CHECK-NEXT: [[R:%.*]] = call nnan half @llvm.copysign.f16(half f0x0000, half [[TMP1]])
; CHECK-NEXT: ret half [[R]]
;
%r = fmul nnan half %x, -0.0
diff --git a/llvm/test/Transforms/InstCombine/fpclass-from-dom-cond.ll b/llvm/test/Transforms/InstCombine/fpclass-from-dom-cond.ll
index 78329faf341727..ed4e69b5d0e54b 100644
--- a/llvm/test/Transforms/InstCombine/fpclass-from-dom-cond.ll
+++ b/llvm/test/Transforms/InstCombine/fpclass-from-dom-cond.ll
@@ -461,7 +461,7 @@ define i1 @pr118257(half %v0, half %v1) {
; CHECK-LABEL: define i1 @pr118257(
; CHECK-SAME: half [[V0:%.*]], half [[V1:%.*]]) {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[CMP1:%.*]] = fcmp une half [[V1]], 0xH0000
+; CHECK-NEXT: [[CMP1:%.*]] = fcmp une half [[V1]], f0x0000
; CHECK-NEXT: [[CAST0:%.*]] = bitcast half [[V0]] to i16
; CHECK-NEXT: [[CMP2:%.*]] = icmp slt i16 [[CAST0]], 0
; CHECK-NEXT: [[OR_COND:%.*]] = or i1 [[CMP1]], [[CMP2]]
@@ -493,7 +493,7 @@ define i1 @pr118257_is_fpclass(half %v0, half %v1) {
; CHECK-LABEL: define i1 @pr118257_is_fpclass(
; CHECK-SAME: half [[V0:%.*]], half [[V1:%.*]]) {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[CMP1:%.*]] = fcmp une half [[V1]], 0xH0000
+; CHECK-NEXT: [[CMP1:%.*]] = fcmp une half [[V1]], f0x0000
; CHECK-NEXT: [[CMP2:%.*]] = call i1 @llvm.is.fpclass.f16(half [[V0]], i32 35)
; CHECK-NEXT: [[OR_COND:%.*]] = or i1 [[CMP1]], [[CMP2]]
; CHECK-NEXT: br i1 [[OR_COND]], label [[IF_END:%.*]], label [[IF_ELSE:%.*]]
diff --git a/llvm/test/Transforms/InstCombine/fpextend.ll b/llvm/test/Transforms/InstCombine/fpextend.ll
index c9adbe10d8db44..88b5ecf59d93d3 100644
--- a/llvm/test/Transforms/InstCombine/fpextend.ll
+++ b/llvm/test/Transforms/InstCombine/fpextend.ll
@@ -440,7 +440,7 @@ define half @bf16_to_f32_to_f16(bfloat %a) nounwind {
define bfloat @bf16_frem(bfloat %x) {
; CHECK-LABEL: @bf16_frem(
-; CHECK-NEXT: [[TMP1:%.*]] = frem bfloat [[X:%.*]], 0xR40C9
+; CHECK-NEXT: [[TMP1:%.*]] = frem bfloat [[X:%.*]], f0x40C9
; CHECK-NEXT: ret bfloat [[TMP1]]
;
%t1 = fpext bfloat %x to float
diff --git a/llvm/test/Transforms/InstCombine/fptrunc.ll b/llvm/test/Transforms/InstCombine/fptrunc.ll
index 0b5d8b3cd06e07..ed19924e274407 100644
--- a/llvm/test/Transforms/InstCombine/fptrunc.ll
+++ b/llvm/test/Transforms/InstCombine/fptrunc.ll
@@ -116,8 +116,8 @@ define half @fptrunc_select_true_val_extra_use(half %x, float %y, i1 %cond) {
define half @fptrunc_max(half %arg) {
; CHECK-LABEL: @fptrunc_max(
-; CHECK-NEXT: [[CMP:%.*]] = fcmp olt half [[ARG:%.*]], 0xH0000
-; CHECK-NEXT: [[NARROW_SEL:%.*]] = select i1 [[CMP]], half 0xH0000, half [[ARG]]
+; CHECK-NEXT: [[CMP:%.*]] = fcmp olt half [[ARG:%.*]], f0x0000
+; CHECK-NEXT: [[NARROW_SEL:%.*]] = select i1 [[CMP]], half f0x0000, half [[ARG]]
; CHECK-NEXT: ret half [[NARROW_SEL]]
;
%ext = fpext half %arg to double
diff --git a/llvm/test/Transforms/InstCombine/fsub.ll b/llvm/test/Transforms/InstCombine/fsub.ll
index cffc63405ddcbc..a96ca638349260 100644
--- a/llvm/test/Transforms/InstCombine/fsub.ll
+++ b/llvm/test/Transforms/InstCombine/fsub.ll
@@ -920,7 +920,7 @@ define float @fmul_c1_use(float %x, float %y) {
define half @fdiv_c0(half %x, half %y) {
; CHECK-LABEL: @fdiv_c0(
-; CHECK-NEXT: [[M:%.*]] = fdiv half 0xH4700, [[X:%.*]]
+; CHECK-NEXT: [[M:%.*]] = fdiv half f0x4700, [[X:%.*]]
; CHECK-NEXT: [[R:%.*]] = fsub half [[Y:%.*]], [[M]]
; CHECK-NEXT: ret half [[R]]
;
diff --git a/llvm/test/Transforms/InstCombine/log-to-intrinsic.ll b/llvm/test/Transforms/InstCombine/log-to-intrinsic.ll
index 273d44c0919199..ba8d7bda8964e1 100644
--- a/llvm/test/Transforms/InstCombine/log-to-intrinsic.ll
+++ b/llvm/test/Transforms/InstCombine/log-to-intrinsic.ll
@@ -53,16 +53,16 @@ define fp128 @test_logl_pos(fp128 %f) {
; CHECK-LABEL: define fp128 @test_logl_pos(
; CHECK-SAME: fp128 [[F:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[ISINF:%.*]] = fcmp ugt fp128 [[F]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[ISINF:%.*]] = fcmp ugt fp128 [[F]], f0x00000000000000000000000000000000
; CHECK-NEXT: br i1 [[ISINF]], label %[[IF_END:.*]], label %[[RETURN:.*]]
; CHECK: [[IF_END]]:
; CHECK-NEXT: [[CALL:%.*]] = tail call fp128 @llvm.log.f128(fp128 [[F]])
; CHECK-NEXT: ret fp128 [[CALL]]
; CHECK: [[RETURN]]:
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000000
+; CHECK-NEXT: ret fp128 f0x00000000000000000000000000000000
;
entry:
- %isinf = fcmp ole fp128 %f, 0xL00000000000000000000000000000000
+ %isinf = fcmp ole fp128 %f, f0x00000000000000000000000000000000
br i1 %isinf, label %return, label %if.end
if.end:
@@ -70,7 +70,7 @@ if.end:
ret fp128 %call
return:
- ret fp128 0xL00000000000000000000000000000000
+ ret fp128 f0x00000000000000000000000000000000
}
define float @test_log10f_pos(float %f) {
@@ -125,16 +125,16 @@ define fp128 @test_log10l_pos(fp128 %f) {
; CHECK-LABEL: define fp128 @test_log10l_pos(
; CHECK-SAME: fp128 [[F:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[ISINF:%.*]] = fcmp ugt fp128 [[F]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[ISINF:%.*]] = fcmp ugt fp128 [[F]], f0x00000000000000000000000000000000
; CHECK-NEXT: br i1 [[ISINF]], label %[[IF_END:.*]], label %[[RETURN:.*]]
; CHECK: [[IF_END]]:
; CHECK-NEXT: [[CALL:%.*]] = tail call fp128 @llvm.log10.f128(fp128 [[F]])
; CHECK-NEXT: ret fp128 [[CALL]]
; CHECK: [[RETURN]]:
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000000
+; CHECK-NEXT: ret fp128 f0x00000000000000000000000000000000
;
entry:
- %isinf = fcmp ole fp128 %f, 0xL00000000000000000000000000000000
+ %isinf = fcmp ole fp128 %f, f0x00000000000000000000000000000000
br i1 %isinf, label %return, label %if.end
if.end:
@@ -142,7 +142,7 @@ if.end:
ret fp128 %call
return:
- ret fp128 0xL00000000000000000000000000000000
+ ret fp128 f0x00000000000000000000000000000000
}
define float @test_log2f_pos(float %f) {
@@ -197,16 +197,16 @@ define fp128 @test_log2l_pos(fp128 %f) {
; CHECK-LABEL: define fp128 @test_log2l_pos(
; CHECK-SAME: fp128 [[F:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[ISINF:%.*]] = fcmp ugt fp128 [[F]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[ISINF:%.*]] = fcmp ugt fp128 [[F]], f0x00000000000000000000000000000000
; CHECK-NEXT: br i1 [[ISINF]], label %[[IF_END:.*]], label %[[RETURN:.*]]
; CHECK: [[IF_END]]:
; CHECK-NEXT: [[CALL:%.*]] = tail call fp128 @llvm.log2.f128(fp128 [[F]])
; CHECK-NEXT: ret fp128 [[CALL]]
; CHECK: [[RETURN]]:
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000000
+; CHECK-NEXT: ret fp128 f0x00000000000000000000000000000000
;
entry:
- %isinf = fcmp ole fp128 %f, 0xL00000000000000000000000000000000
+ %isinf = fcmp ole fp128 %f, f0x00000000000000000000000000000000
br i1 %isinf, label %return, label %if.end
if.end:
@@ -214,7 +214,7 @@ if.end:
ret fp128 %call
return:
- ret fp128 0xL00000000000000000000000000000000
+ ret fp128 f0x00000000000000000000000000000000
}
diff --git a/llvm/test/Transforms/InstCombine/nanl-fp128.ll b/llvm/test/Transforms/InstCombine/nanl-fp128.ll
index 21ba0fb14ca209..20d39771016a7e 100644
--- a/llvm/test/Transforms/InstCombine/nanl-fp128.ll
+++ b/llvm/test/Transforms/InstCombine/nanl-fp128.ll
@@ -7,7 +7,7 @@
define fp128 @nanl_empty() {
; CHECK-LABEL: define fp128 @nanl_empty() {
-; CHECK-NEXT: ret fp128 0xL00000000000000007FFF800000000000
+; CHECK-NEXT: ret fp128 f0x7FFF8000000000000000000000000000
;
%res = call fp128 @nanl(ptr @empty)
ret fp128 %res
@@ -15,7 +15,7 @@ define fp128 @nanl_empty() {
define fp128 @nanl_dec() {
; CHECK-LABEL: define fp128 @nanl_dec() {
-; CHECK-NEXT: ret fp128 0xL00000000000000017FFF800000000000
+; CHECK-NEXT: ret fp128 f0x7FFF8000000000000000000000000001
;
%res = call fp128 @nanl(ptr @dec)
ret fp128 %res
@@ -23,7 +23,7 @@ define fp128 @nanl_dec() {
define fp128 @nanl_hex() {
; CHECK-LABEL: define fp128 @nanl_hex() {
-; CHECK-NEXT: ret fp128 0xL000000000000000F7FFF800000000000
+; CHECK-NEXT: ret fp128 f0x7FFF800000000000000000000000000F
;
%res = call fp128 @nanl(ptr @hex)
ret fp128 %res
diff --git a/llvm/test/Transforms/InstCombine/nanl-fp80.ll b/llvm/test/Transforms/InstCombine/nanl-fp80.ll
index 7868af3696a560..4bbef8ad6485ee 100644
--- a/llvm/test/Transforms/InstCombine/nanl-fp80.ll
+++ b/llvm/test/Transforms/InstCombine/nanl-fp80.ll
@@ -7,7 +7,7 @@
define x86_fp80 @nanl_empty() {
; CHECK-LABEL: define x86_fp80 @nanl_empty() {
-; CHECK-NEXT: ret x86_fp80 0xK7FFFC000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x7FFFC000000000000000
;
%res = call x86_fp80 @nanl(ptr @empty)
ret x86_fp80 %res
@@ -15,7 +15,7 @@ define x86_fp80 @nanl_empty() {
define x86_fp80 @nanl_dec() {
; CHECK-LABEL: define x86_fp80 @nanl_dec() {
-; CHECK-NEXT: ret x86_fp80 0xK7FFFC000000000000001
+; CHECK-NEXT: ret x86_fp80 f0x7FFFC000000000000001
;
%res = call x86_fp80 @nanl(ptr @dec)
ret x86_fp80 %res
@@ -23,7 +23,7 @@ define x86_fp80 @nanl_dec() {
define x86_fp80 @nanl_hex() {
; CHECK-LABEL: define x86_fp80 @nanl_hex() {
-; CHECK-NEXT: ret x86_fp80 0xK7FFFC00000000000000F
+; CHECK-NEXT: ret x86_fp80 f0x7FFFC00000000000000F
;
%res = call x86_fp80 @nanl(ptr @hex)
ret x86_fp80 %res
diff --git a/llvm/test/Transforms/InstCombine/nanl-ppc-fp128.ll b/llvm/test/Transforms/InstCombine/nanl-ppc-fp128.ll
index 7f60a379c48854..1b34350005fba4 100644
--- a/llvm/test/Transforms/InstCombine/nanl-ppc-fp128.ll
+++ b/llvm/test/Transforms/InstCombine/nanl-ppc-fp128.ll
@@ -7,7 +7,7 @@
define ppc_fp128 @nanl_empty() {
; CHECK-LABEL: define ppc_fp128 @nanl_empty() {
-; CHECK-NEXT: ret ppc_fp128 0xM7FF80000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000007FF8000000000000
;
%res = call ppc_fp128 @nanl(ptr @empty)
ret ppc_fp128 %res
@@ -15,7 +15,7 @@ define ppc_fp128 @nanl_empty() {
define ppc_fp128 @nanl_dec() {
; CHECK-LABEL: define ppc_fp128 @nanl_dec() {
-; CHECK-NEXT: ret ppc_fp128 0xM7FF80000000000010000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000007FF8000000000001
;
%res = call ppc_fp128 @nanl(ptr @dec)
ret ppc_fp128 %res
@@ -23,7 +23,7 @@ define ppc_fp128 @nanl_dec() {
define ppc_fp128 @nanl_hex() {
; CHECK-LABEL: define ppc_fp128 @nanl_hex() {
-; CHECK-NEXT: ret ppc_fp128 0xM7FF800000000000F0000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000007FF800000000000F
;
%res = call ppc_fp128 @nanl(ptr @hex)
ret ppc_fp128 %res
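
For the multi-word types the rewrite is more than a prefix swap: the legacy 0xL and 0xM forms stored the two 64-bit halves in the opposite order from the new f0x form, while 0xK carries over digit-for-digit. A minimal sketch (hypothetical function names; the constants are the 1.0 bit patterns these tests exercise) contrasting the three extended types:

  define x86_fp80 @one_fp80_sketch() {
    ret x86_fp80 f0x3FFF8000000000000000              ; 1.0 (was 0xK3FFF8000000000000000; same digits)
  }
  define fp128 @one_fp128_sketch() {
    ret fp128 f0x3FFF0000000000000000000000000000     ; 1.0 (was 0xL00000000000000003FFF000000000000; halves swapped)
  }
  define ppc_fp128 @one_ppcfp128_sketch() {
    ret ppc_fp128 f0x00000000000000003FF0000000000000 ; 1.0 (was 0xM3FF00000000000000000000000000000; halves swapped)
  }
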
diff --git a/llvm/test/Transforms/InstCombine/pow-1.ll b/llvm/test/Transforms/InstCombine/pow-1.ll
index c23d15d16a34ba..4a82720ecc6fbd 100644
--- a/llvm/test/Transforms/InstCombine/pow-1.ll
+++ b/llvm/test/Transforms/InstCombine/pow-1.ll
@@ -1472,7 +1472,7 @@ define double @test_libcall_pow_10_f64_noerrno(double %x) {
define half @test_pow_10_f16(half %x) {
; CHECK-LABEL: define half @test_pow_10_f16(
; CHECK-SAME: half [[X:%.*]]) {
-; CHECK-NEXT: [[RETVAL:%.*]] = call half @llvm.pow.f16(half 0xH4900, half [[X]])
+; CHECK-NEXT: [[RETVAL:%.*]] = call half @llvm.pow.f16(half f0x4900, half [[X]])
; CHECK-NEXT: ret half [[RETVAL]]
;
%retval = call half @llvm.pow.f16(half 10.0, half %x)
@@ -1512,7 +1512,7 @@ define double @test_pow_10_f64(double %x) {
define fp128 @test_pow_10_fp128(fp128 %x) {
; CHECK-LABEL: define fp128 @test_pow_10_fp128(
; CHECK-SAME: fp128 [[X:%.*]]) {
-; CHECK-NEXT: [[RETVAL:%.*]] = call fp128 @llvm.pow.f128(fp128 0xL00000000000000004002400000000000, fp128 [[X]])
+; CHECK-NEXT: [[RETVAL:%.*]] = call fp128 @llvm.pow.f128(fp128 f0x40024000000000000000000000000000, fp128 [[X]])
; CHECK-NEXT: ret fp128 [[RETVAL]]
;
%ten = fpext double 10.0 to fp128
@@ -1523,7 +1523,7 @@ define fp128 @test_pow_10_fp128(fp128 %x) {
define bfloat @test_pow_10_bf16(bfloat %x) {
; CHECK-LABEL: define bfloat @test_pow_10_bf16(
; CHECK-SAME: bfloat [[X:%.*]]) {
-; CHECK-NEXT: [[RETVAL:%.*]] = call bfloat @llvm.pow.bf16(bfloat 0xR4120, bfloat [[X]])
+; CHECK-NEXT: [[RETVAL:%.*]] = call bfloat @llvm.pow.bf16(bfloat f0x4120, bfloat [[X]])
; CHECK-NEXT: ret bfloat [[RETVAL]]
;
%retval = call bfloat @llvm.pow.bf16(bfloat 10.0, bfloat %x)
@@ -1533,7 +1533,7 @@ define bfloat @test_pow_10_bf16(bfloat %x) {
define <2 x half> @test_pow_10_v2f16(<2 x half> %x) {
; CHECK-LABEL: define <2 x half> @test_pow_10_v2f16(
; CHECK-SAME: <2 x half> [[X:%.*]]) {
-; CHECK-NEXT: [[RETVAL:%.*]] = call <2 x half> @llvm.pow.v2f16(<2 x half> splat (half 0xH4900), <2 x half> [[X]])
+; CHECK-NEXT: [[RETVAL:%.*]] = call <2 x half> @llvm.pow.v2f16(<2 x half> splat (half f0x4900), <2 x half> [[X]])
; CHECK-NEXT: ret <2 x half> [[RETVAL]]
;
%retval = call <2 x half> @llvm.pow.v2f16(<2 x half> <half 10.0, half 10.0>, <2 x half> %x)
@@ -1563,7 +1563,7 @@ define <2 x double> @test_pow_10_v2f64(<2 x double> %x) {
define <2 x bfloat> @test_pow_10_v2bf16(<2 x bfloat> %x) {
; CHECK-LABEL: define <2 x bfloat> @test_pow_10_v2bf16(
; CHECK-SAME: <2 x bfloat> [[X:%.*]]) {
-; CHECK-NEXT: [[RETVAL:%.*]] = call <2 x bfloat> @llvm.pow.v2bf16(<2 x bfloat> splat (bfloat 0xR4120), <2 x bfloat> [[X]])
+; CHECK-NEXT: [[RETVAL:%.*]] = call <2 x bfloat> @llvm.pow.v2bf16(<2 x bfloat> splat (bfloat f0x4120), <2 x bfloat> [[X]])
; CHECK-NEXT: ret <2 x bfloat> [[RETVAL]]
;
%retval = call <2 x bfloat> @llvm.pow.v2bf16(<2 x bfloat> <bfloat 10.0, bfloat 10.0>, <2 x bfloat> %x)
diff --git a/llvm/test/Transforms/InstCombine/pow-exp.ll b/llvm/test/Transforms/InstCombine/pow-exp.ll
index 9d91ad2402eb1d..9e8cdf4785301f 100644
--- a/llvm/test/Transforms/InstCombine/pow-exp.ll
+++ b/llvm/test/Transforms/InstCombine/pow-exp.ll
@@ -479,10 +479,10 @@ define float @powf_ok_base_no_afn(float %e) {
define fp128 @powl_long_dbl_no_fold(fp128 %e) {
; CHECK-LABEL: @powl_long_dbl_no_fold(
-; CHECK-NEXT: [[CALL:%.*]] = tail call nnan ninf afn fp128 @powl(fp128 0xL00000000000000005001000000000000, fp128 [[E:%.*]])
+; CHECK-NEXT: [[CALL:%.*]] = tail call nnan ninf afn fp128 @powl(fp128 f0x50010000000000000000000000000000, fp128 [[E:%.*]])
; CHECK-NEXT: ret fp128 [[CALL]]
;
- %call = tail call afn nnan ninf fp128 @powl(fp128 0xL00000000000000005001000000000000, fp128 %e)
+ %call = tail call afn nnan ninf fp128 @powl(fp128 f0x50010000000000000000000000000000, fp128 %e)
ret fp128 %call
}
diff --git a/llvm/test/Transforms/InstCombine/pow-to-ldexp.ll b/llvm/test/Transforms/InstCombine/pow-to-ldexp.ll
index b8d405eac14d51..d3599f9c356417 100644
--- a/llvm/test/Transforms/InstCombine/pow-to-ldexp.ll
+++ b/llvm/test/Transforms/InstCombine/pow-to-ldexp.ll
@@ -116,7 +116,7 @@ define double @pow_sitofp_f64_const_base_2(i32 %x) {
define half @pow_sitofp_f16_const_base_2(i32 %x) {
; CHECK-LABEL: define half @pow_sitofp_f16_const_base_2(
; CHECK-SAME: i32 [[X:%.*]]) {
-; CHECK-NEXT: [[POW:%.*]] = tail call half @llvm.ldexp.f16.i32(half 0xH3C00, i32 [[X]])
+; CHECK-NEXT: [[POW:%.*]] = tail call half @llvm.ldexp.f16.i32(half f0x3C00, i32 [[X]])
; CHECK-NEXT: ret half [[POW]]
;
%itofp = sitofp i32 %x to half
@@ -198,7 +198,7 @@ define <vscale x 4 x float> @pow_sitofp_nxv4f32_const_base_2(<vscale x 4 x i32>
define <2 x half> @pow_sitofp_v2f16_const_base_2(<2 x i32> %x) {
; CHECK-LABEL: define <2 x half> @pow_sitofp_v2f16_const_base_2(
; CHECK-SAME: <2 x i32> [[X:%.*]]) {
-; CHECK-NEXT: [[EXP2:%.*]] = tail call <2 x half> @llvm.ldexp.v2f16.v2i32(<2 x half> splat (half 0xH3C00), <2 x i32> [[X]])
+; CHECK-NEXT: [[EXP2:%.*]] = tail call <2 x half> @llvm.ldexp.v2f16.v2i32(<2 x half> splat (half f0x3C00), <2 x i32> [[X]])
; CHECK-NEXT: ret <2 x half> [[EXP2]]
;
%itofp = sitofp <2 x i32> %x to <2 x half>
@@ -221,27 +221,27 @@ define <2 x half> @pow_sitofp_v2f16_const_base_8(<2 x i32> %x) {
; EXP2-LABEL: define <2 x half> @pow_sitofp_v2f16_const_base_8(
; EXP2-SAME: <2 x i32> [[X:%.*]]) {
; EXP2-NEXT: [[ITOFP:%.*]] = sitofp <2 x i32> [[X]] to <2 x half>
-; EXP2-NEXT: [[MUL:%.*]] = fmul <2 x half> [[ITOFP]], <half 0xH4200, half 0xH4200>
+; EXP2-NEXT: [[MUL:%.*]] = fmul <2 x half> [[ITOFP]], <half f0x4200, half f0x4200>
; EXP2-NEXT: [[EXP2:%.*]] = tail call <2 x half> @llvm.exp2.v2f16(<2 x half> [[MUL]])
; EXP2-NEXT: ret <2 x half> [[EXP2]]
;
; LDEXP-EXP2-LABEL: define <2 x half> @pow_sitofp_v2f16_const_base_8(
; LDEXP-EXP2-SAME: <2 x i32> [[X:%.*]]) {
; LDEXP-EXP2-NEXT: [[ITOFP:%.*]] = sitofp <2 x i32> [[X]] to <2 x half>
-; LDEXP-EXP2-NEXT: [[MUL:%.*]] = fmul <2 x half> [[ITOFP]], splat (half 0xH4200)
+; LDEXP-EXP2-NEXT: [[MUL:%.*]] = fmul <2 x half> [[ITOFP]], splat (half f0x4200)
; LDEXP-EXP2-NEXT: [[EXP2:%.*]] = tail call <2 x half> @llvm.exp2.v2f16(<2 x half> [[MUL]])
; LDEXP-EXP2-NEXT: ret <2 x half> [[EXP2]]
;
; LDEXP-NOEXP2-LABEL: define <2 x half> @pow_sitofp_v2f16_const_base_8(
; LDEXP-NOEXP2-SAME: <2 x i32> [[X:%.*]]) {
; LDEXP-NOEXP2-NEXT: [[ITOFP:%.*]] = sitofp <2 x i32> [[X]] to <2 x half>
-; LDEXP-NOEXP2-NEXT: [[POW:%.*]] = tail call <2 x half> @llvm.pow.v2f16(<2 x half> splat (half 0xH4800), <2 x half> [[ITOFP]])
+; LDEXP-NOEXP2-NEXT: [[POW:%.*]] = tail call <2 x half> @llvm.pow.v2f16(<2 x half> splat (half f0x4800), <2 x half> [[ITOFP]])
; LDEXP-NOEXP2-NEXT: ret <2 x half> [[POW]]
;
; NOLDEXP-LABEL: define <2 x half> @pow_sitofp_v2f16_const_base_8(
; NOLDEXP-SAME: <2 x i32> [[X:%.*]]) {
; NOLDEXP-NEXT: [[ITOFP:%.*]] = sitofp <2 x i32> [[X]] to <2 x half>
-; NOLDEXP-NEXT: [[MUL:%.*]] = fmul <2 x half> [[ITOFP]], splat (half 0xH4200)
+; NOLDEXP-NEXT: [[MUL:%.*]] = fmul <2 x half> [[ITOFP]], splat (half f0x4200)
; NOLDEXP-NEXT: [[EXP2:%.*]] = tail call <2 x half> @llvm.exp2.v2f16(<2 x half> [[MUL]])
; NOLDEXP-NEXT: ret <2 x half> [[EXP2]]
;
@@ -286,11 +286,11 @@ define <2 x double> @pow_sitofp_v2f64_const_base_8(<2 x i32> %x) {
define fp128 @pow_sitofp_fp128_const_base_2(i32 %x) {
; CHECK-LABEL: define fp128 @pow_sitofp_fp128_const_base_2(
; CHECK-SAME: i32 [[X:%.*]]) {
-; CHECK-NEXT: [[EXP2:%.*]] = tail call fp128 @llvm.ldexp.f128.i32(fp128 0xL00000000000000003FFF000000000000, i32 [[X]])
+; CHECK-NEXT: [[EXP2:%.*]] = tail call fp128 @llvm.ldexp.f128.i32(fp128 f0x3FFF0000000000000000000000000000, i32 [[X]])
; CHECK-NEXT: ret fp128 [[EXP2]]
;
%itofp = sitofp i32 %x to fp128
- %pow = tail call fp128 @llvm.pow.fp128(fp128 0xL00000000000000004000000000000000, fp128 %itofp)
+ %pow = tail call fp128 @llvm.pow.fp128(fp128 f0x40000000000000000000000000000000, fp128 %itofp)
ret fp128 %pow
}
@@ -381,10 +381,10 @@ define double @readnone_libcall_pow_sitofp_f32_const_base_2(i32 %x) {
define fp128 @readnone_libcall_powl_sitofp_fp128_const_base_2(i32 %x) {
; CHECK-LABEL: define fp128 @readnone_libcall_powl_sitofp_fp128_const_base_2(
; CHECK-SAME: i32 [[X:%.*]]) {
-; CHECK-NEXT: [[EXP2:%.*]] = tail call fp128 @llvm.ldexp.f128.i32(fp128 0xL00000000000000003FFF000000000000, i32 [[X]])
+; CHECK-NEXT: [[EXP2:%.*]] = tail call fp128 @llvm.ldexp.f128.i32(fp128 f0x3FFF0000000000000000000000000000, i32 [[X]])
; CHECK-NEXT: ret fp128 [[EXP2]]
;
%itofp = sitofp i32 %x to fp128
- %pow = tail call fp128 @powl(fp128 0xL00000000000000004000000000000000, fp128 %itofp) memory(none)
+ %pow = tail call fp128 @powl(fp128 f0x40000000000000000000000000000000, fp128 %itofp) memory(none)
ret fp128 %pow
}
diff --git a/llvm/test/Transforms/InstCombine/remquol-fp128.ll b/llvm/test/Transforms/InstCombine/remquol-fp128.ll
index 38e0a6040a1409..5bf51bcf5d89d6 100644
--- a/llvm/test/Transforms/InstCombine/remquol-fp128.ll
+++ b/llvm/test/Transforms/InstCombine/remquol-fp128.ll
@@ -6,10 +6,10 @@ define fp128 @remquo_fp128(ptr %quo) {
; CHECK-SAME: ptr [[QUO:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK-NEXT: store i32 -2, ptr [[QUO]], align 4
-; CHECK-NEXT: ret fp128 0xL00000000000000003FFF000000000000
+; CHECK-NEXT: ret fp128 f0x3FFF0000000000000000000000000000
;
entry:
- %call = call fp128 @remquol(fp128 0xL0000000000000000C001400000000000, fp128 0xL00000000000000004000800000000000, ptr %quo)
+ %call = call fp128 @remquol(fp128 f0xC0014000000000000000000000000000, fp128 f0x40008000000000000000000000000000, ptr %quo)
ret fp128 %call
}
diff --git a/llvm/test/Transforms/InstCombine/remquol-fp80.ll b/llvm/test/Transforms/InstCombine/remquol-fp80.ll
index fe65ee1acc9025..772bca9e7beafe 100644
--- a/llvm/test/Transforms/InstCombine/remquol-fp80.ll
+++ b/llvm/test/Transforms/InstCombine/remquol-fp80.ll
@@ -6,10 +6,10 @@ define x86_fp80 @remquo_fp80(ptr %quo) {
; CHECK-SAME: ptr [[QUO:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK-NEXT: store i32 -2, ptr [[QUO]], align 4
-; CHECK-NEXT: ret x86_fp80 0xK3FFF8000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x3FFF8000000000000000
;
entry:
- %call = call x86_fp80 @remquol(x86_fp80 0xKC001A000000000000000, x86_fp80 0xK4000C000000000000000, ptr %quo)
+ %call = call x86_fp80 @remquol(x86_fp80 f0xC001A000000000000000, x86_fp80 f0x4000C000000000000000, ptr %quo)
ret x86_fp80 %call
}
diff --git a/llvm/test/Transforms/InstCombine/remquol-ppc-fp128.ll b/llvm/test/Transforms/InstCombine/remquol-ppc-fp128.ll
index 86dfd01f859aca..4a19f03424f4ed 100644
--- a/llvm/test/Transforms/InstCombine/remquol-ppc-fp128.ll
+++ b/llvm/test/Transforms/InstCombine/remquol-ppc-fp128.ll
@@ -6,10 +6,10 @@ define ppc_fp128 @remquo_ppc_fp128(ptr %quo) {
; CHECK-SAME: ptr [[QUO:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK-NEXT: store i32 -2, ptr [[QUO]], align 4
-; CHECK-NEXT: ret ppc_fp128 0xM3FF00000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000003FF0000000000000
;
entry:
- %call = call ppc_fp128 @remquol(ppc_fp128 0xMC0140000000000000000000000000000, ppc_fp128 0xM40080000000000000000000000000000, ptr %quo)
+ %call = call ppc_fp128 @remquol(ppc_fp128 f0x0000000000000000C014000000000000, ppc_fp128 f0x00000000000000004008000000000000, ptr %quo)
ret ppc_fp128 %call
}
diff --git a/llvm/test/Transforms/InstCombine/select-with-extreme-eq-cond.ll b/llvm/test/Transforms/InstCombine/select-with-extreme-eq-cond.ll
index 7f2cca44eab3b7..a1850344273632 100644
--- a/llvm/test/Transforms/InstCombine/select-with-extreme-eq-cond.ll
+++ b/llvm/test/Transforms/InstCombine/select-with-extreme-eq-cond.ll
@@ -267,8 +267,8 @@ define i1 @compare_float_negative(half %x, half %y) {
; CHECK-LABEL: define i1 @compare_float_negative(
; CHECK-SAME: half [[X:%.*]], half [[Y:%.*]]) {
; CHECK-NEXT: [[START:.*:]]
-; CHECK-NEXT: [[TMP2:%.*]] = fcmp oeq half [[X]], 0xH0000
-; CHECK-NEXT: [[TMP3:%.*]] = fcmp one half [[Y]], 0xH0000
+; CHECK-NEXT: [[TMP2:%.*]] = fcmp oeq half [[X]], f0x0000
+; CHECK-NEXT: [[TMP3:%.*]] = fcmp one half [[Y]], f0x0000
; CHECK-NEXT: [[TMP4:%.*]] = fcmp ult half [[X]], [[Y]]
; CHECK-NEXT: [[RESULT:%.*]] = select i1 [[TMP2]], i1 [[TMP3]], i1 [[TMP4]]
; CHECK-NEXT: ret i1 [[RESULT]]
diff --git a/llvm/test/Transforms/InstCombine/unordered-compare-and-ordered.ll b/llvm/test/Transforms/InstCombine/unordered-compare-and-ordered.ll
index ec015e8ad2aaa0..9fad7ebbe7630a 100644
--- a/llvm/test/Transforms/InstCombine/unordered-compare-and-ordered.ll
+++ b/llvm/test/Transforms/InstCombine/unordered-compare-and-ordered.ll
@@ -3,7 +3,7 @@
define i1 @fcmp_ord_and_uno(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_uno(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UNO:%.*]] = fcmp uno half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UNO]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -16,7 +16,7 @@ define i1 @fcmp_ord_and_uno(half %x, half %y) {
define i1 @fcmp_ord_and_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ueq(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -29,7 +29,7 @@ define i1 @fcmp_ord_and_ueq(half %x, half %y) {
define i1 @fcmp_ord_and_ugt(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ugt(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UGT:%.*]] = fcmp ugt half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UGT]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -42,7 +42,7 @@ define i1 @fcmp_ord_and_ugt(half %x, half %y) {
define i1 @fcmp_ord_and_uge(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_uge(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UGE:%.*]] = fcmp uge half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UGE]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -55,7 +55,7 @@ define i1 @fcmp_ord_and_uge(half %x, half %y) {
define i1 @fcmp_ord_and_ult(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ult(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[ULT:%.*]] = fcmp ult half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[ULT]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -68,7 +68,7 @@ define i1 @fcmp_ord_and_ult(half %x, half %y) {
define i1 @fcmp_ord_and_ule(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ule(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[ULE:%.*]] = fcmp ule half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[ULE]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -81,7 +81,7 @@ define i1 @fcmp_ord_and_ule(half %x, half %y) {
define i1 @fcmp_ord_and_une(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_une(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UNE:%.*]] = fcmp une half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UNE]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -119,7 +119,7 @@ define <2 x i1> @fcmp_ord_and_ueq_vector(<2 x half> %x, <2 x half> %y) {
; Negative test
define i1 @fcmp_ord_and_ueq_different_value0(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_ord_and_ueq_different_value0(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Z:%.*]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -133,7 +133,7 @@ define i1 @fcmp_ord_and_ueq_different_value0(half %x, half %y, half %z) {
; Negative test
define i1 @fcmp_ord_and_ueq_different_value1(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_ord_and_ueq_different_value1(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y:%.*]], [[Z:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -150,7 +150,7 @@ define i1 @fcmp_ord_and_ueq_commute0() {
; CHECK-LABEL: @fcmp_ord_and_ueq_commute0(
; CHECK-NEXT: [[X:%.*]] = call half @foo()
; CHECK-NEXT: [[Y:%.*]] = call half @foo()
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[UEQ]], [[ORD]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -167,7 +167,7 @@ define i1 @fcmp_ord_and_ueq_commute1() {
; CHECK-LABEL: @fcmp_ord_and_ueq_commute1(
; CHECK-NEXT: [[X:%.*]] = call half @foo()
; CHECK-NEXT: [[Y:%.*]] = call half @foo()
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -182,7 +182,7 @@ define i1 @fcmp_ord_and_ueq_commute1() {
define i1 @fcmp_oeq_x_x_and_ult(half %x, half %y) {
; CHECK-LABEL: @fcmp_oeq_x_x_and_ult(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[ULT:%.*]] = fcmp ult half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[ULT]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -195,7 +195,7 @@ define i1 @fcmp_oeq_x_x_and_ult(half %x, half %y) {
define i1 @fcmp_ord_and_ueq_preserve_flags(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ueq_preserve_flags(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp nsz ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp nsz ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp nsz ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -208,7 +208,7 @@ define i1 @fcmp_ord_and_ueq_preserve_flags(half %x, half %y) {
define i1 @fcmp_ord_and_ueq_preserve_subset_flags0(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ueq_preserve_subset_flags0(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp nsz ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp nsz ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ninf nsz ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -221,7 +221,7 @@ define i1 @fcmp_ord_and_ueq_preserve_subset_flags0(half %x, half %y) {
define i1 @fcmp_ord_and_ueq_preserve_subset_flags1(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ueq_preserve_subset_flags1(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ninf nsz ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ninf nsz ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp nsz ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -234,7 +234,7 @@ define i1 @fcmp_ord_and_ueq_preserve_subset_flags1(half %x, half %y) {
define i1 @fcmp_ord_and_ueq_flags_lhs(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ueq_flags_lhs(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp nsz ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp nsz ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -247,7 +247,7 @@ define i1 @fcmp_ord_and_ueq_flags_lhs(half %x, half %y) {
define i1 @fcmp_ord_and_ueq_flags_rhs(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_ueq_flags_rhs(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp nsz ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -262,7 +262,7 @@ define i1 @fcmp_ord_and_ueq_flags_rhs(half %x, half %y) {
define i1 @fcmp_ord_and_fabs_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_fabs_ueq(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[FABS_X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -276,7 +276,7 @@ define i1 @fcmp_ord_and_fabs_ueq(half %x, half %y) {
define i1 @fcmp_ord_fabs_and_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_fabs_and_ueq(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -293,7 +293,7 @@ define i1 @fcmp_ord_and_fabs_ueq_commute0() {
; CHECK-NEXT: [[X:%.*]] = call half @foo()
; CHECK-NEXT: [[Y:%.*]] = call half @foo()
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y]], [[FABS_X]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -312,7 +312,7 @@ define i1 @fcmp_ord_and_fabs_ueq_commute1() {
; CHECK-NEXT: [[X:%.*]] = call half @foo()
; CHECK-NEXT: [[Y:%.*]] = call half @foo()
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y]], [[FABS_X]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[UEQ]], [[ORD]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -344,7 +344,7 @@ define <2 x i1> @fcmp_ord_and_fabs_ueq_vector(<2 x half> %x, <2 x half> %y) {
define i1 @fcmp_ord_fabs_and_fabs_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_fabs_and_fabs_ueq(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[FABS_X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -359,7 +359,7 @@ define i1 @fcmp_ord_fabs_and_fabs_ueq(half %x, half %y) {
define i1 @fcmp_ord_and_fneg_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_fneg_ueq(
; CHECK-NEXT: [[FNEG_X:%.*]] = fneg half [[X:%.*]]
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y:%.*]], [[FNEG_X]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -373,7 +373,7 @@ define i1 @fcmp_ord_and_fneg_ueq(half %x, half %y) {
define i1 @fcmp_ord_fneg_and_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_fneg_and_ueq(
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X:%.*]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -388,7 +388,7 @@ define i1 @fcmp_ord_fneg_and_ueq(half %x, half %y) {
define i1 @fcmp_ord_fneg_and_fneg_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_fneg_and_fneg_ueq(
; CHECK-NEXT: [[FNEG_X:%.*]] = fneg half [[X:%.*]]
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y:%.*]], [[FNEG_X]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -404,7 +404,7 @@ define i1 @fcmp_ord_and_fneg_fabs_ueq(half %x, half %y) {
; CHECK-LABEL: @fcmp_ord_and_fneg_fabs_ueq(
; CHECK-NEXT: [[FABS_X:%.*]] = call half @llvm.fabs.f16(half [[X:%.*]])
; CHECK-NEXT: [[FNEG_FABS_X:%.*]] = fneg half [[FABS_X]]
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y:%.*]], [[FNEG_FABS_X]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -420,7 +420,7 @@ define i1 @fcmp_ord_and_fneg_fabs_ueq(half %x, half %y) {
define i1 @fcmp_ord_and_copysign_ueq(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_ord_and_copysign_ueq(
; CHECK-NEXT: [[COPYSIGN_X_Y:%.*]] = call half @llvm.copysign.f16(half [[X:%.*]], half [[Z:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[COPYSIGN_X_Y]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -435,7 +435,7 @@ define i1 @fcmp_ord_and_copysign_ueq(half %x, half %y, half %z) {
define i1 @fcmp_copysign_ord_and_ueq(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_copysign_ord_and_ueq(
; CHECK-NEXT: [[COPYSIGN_X_Y:%.*]] = call half @llvm.copysign.f16(half [[X:%.*]], half [[Z:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[COPYSIGN_X_Y]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[COPYSIGN_X_Y]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[X]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -450,7 +450,7 @@ define i1 @fcmp_copysign_ord_and_ueq(half %x, half %y, half %z) {
define i1 @fcmp_ord_and_copysign_ueq_commute(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_ord_and_copysign_ueq_commute(
; CHECK-NEXT: [[COPYSIGN_X_Y:%.*]] = call half @llvm.copysign.f16(half [[X:%.*]], half [[Z:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[Y:%.*]], [[COPYSIGN_X_Y]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -465,7 +465,7 @@ define i1 @fcmp_ord_and_copysign_ueq_commute(half %x, half %y, half %z) {
define i1 @fcmp_ord_and_copysign_fneg_ueq(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_ord_and_copysign_fneg_ueq(
; CHECK-NEXT: [[COPYSIGN_X_Y:%.*]] = call half @llvm.copysign.f16(half [[X:%.*]], half [[Z:%.*]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[COPYSIGN_X_Y]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
@@ -482,7 +482,7 @@ define i1 @fcmp_ord_and_fneg_copysign_ueq(half %x, half %y, half %z) {
; CHECK-LABEL: @fcmp_ord_and_fneg_copysign_ueq(
; CHECK-NEXT: [[TMP1:%.*]] = fneg half [[Z:%.*]]
; CHECK-NEXT: [[FNEG_COPYSIGN:%.*]] = call half @llvm.copysign.f16(half [[X:%.*]], half [[TMP1]])
-; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], 0xH0000
+; CHECK-NEXT: [[ORD:%.*]] = fcmp ord half [[X]], f0x0000
; CHECK-NEXT: [[UEQ:%.*]] = fcmp ueq half [[FNEG_COPYSIGN]], [[Y:%.*]]
; CHECK-NEXT: [[AND:%.*]] = and i1 [[ORD]], [[UEQ]]
; CHECK-NEXT: ret i1 [[AND]]
diff --git a/llvm/test/Transforms/InstCombine/win-fdim.ll b/llvm/test/Transforms/InstCombine/win-fdim.ll
index a2e9de77cb58d6..27f4b07c133918 100644
--- a/llvm/test/Transforms/InstCombine/win-fdim.ll
+++ b/llvm/test/Transforms/InstCombine/win-fdim.ll
@@ -22,10 +22,10 @@ define float @fdim_float() {
;fdiml is not supported by windows
define fp128 @fdim_long() {
; MSVC19-LABEL: define fp128 @fdim_long() {
-; MSVC19-NEXT: [[DIM:%.*]] = call fp128 @fdiml(fp128 0xL00000000000000000000000000000000, fp128 0xL00000000000000000000000000000000)
+; MSVC19-NEXT: [[DIM:%.*]] = call fp128 @fdiml(fp128 f0x00000000000000000000000000000000, fp128 f0x00000000000000000000000000000000)
; MSVC19-NEXT: ret fp128 [[DIM]]
;
- %dim = call fp128 @fdiml(fp128 0xL00000000000000000000000000000000 , fp128 0xL00000000000000000000000000000000)
+ %dim = call fp128 @fdiml(fp128 f0x00000000000000000000000000000000 , fp128 f0x00000000000000000000000000000000)
ret fp128 %dim
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/cos.ll b/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/cos.ll
index 5368da112ab46e..6152ca5c87c051 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/cos.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/cos.ll
@@ -7,27 +7,27 @@ declare double @llvm.amdgcn.cos.f64(double) #0
define void @test_f16(ptr %p) {
; CHECK-LABEL: @test_f16(
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P:%.*]], align 2
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH39A8, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH39A8, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xHBC00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xHBC00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P]], align 2
-; CHECK-NEXT: [[P1000:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xH63D0)
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P:%.*]], align 2
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x39A8, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x39A8, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0xBC00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0xBC00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P]], align 2
+; CHECK-NEXT: [[P1000:%.*]] = call half @llvm.amdgcn.cos.f16(half f0x63D0)
; CHECK-NEXT: store volatile half [[P1000]], ptr [[P]], align 2
-; CHECK-NEXT: [[N1000:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xHE3D0)
+; CHECK-NEXT: [[N1000:%.*]] = call half @llvm.amdgcn.cos.f16(half f0xE3D0)
; CHECK-NEXT: store volatile half [[N1000]], ptr [[P]], align 2
-; CHECK-NEXT: [[PINF:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xH7C00)
+; CHECK-NEXT: [[PINF:%.*]] = call half @llvm.amdgcn.cos.f16(half f0x7C00)
; CHECK-NEXT: store volatile half [[PINF]], ptr [[P]], align 2
-; CHECK-NEXT: [[NINF:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xHFC00)
+; CHECK-NEXT: [[NINF:%.*]] = call half @llvm.amdgcn.cos.f16(half f0xFC00)
; CHECK-NEXT: store volatile half [[NINF]], ptr [[P]], align 2
-; CHECK-NEXT: [[NAN:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xH7E00)
+; CHECK-NEXT: [[NAN:%.*]] = call half @llvm.amdgcn.cos.f16(half f0x7E00)
; CHECK-NEXT: store volatile half [[NAN]], ptr [[P]], align 2
; CHECK-NEXT: ret void
;
@@ -59,11 +59,11 @@ define void @test_f16(ptr %p) {
store volatile half %p1000, ptr %p
%n1000 = call half @llvm.amdgcn.cos.f16(half -1000.0)
store volatile half %n1000, ptr %p
- %pinf = call half @llvm.amdgcn.cos.f16(half 0xH7C00) ; +inf
+ %pinf = call half @llvm.amdgcn.cos.f16(half f0x7C00) ; +inf
store volatile half %pinf, ptr %p
- %ninf = call half @llvm.amdgcn.cos.f16(half 0xHFC00) ; -inf
+ %ninf = call half @llvm.amdgcn.cos.f16(half f0xFC00) ; -inf
store volatile half %ninf, ptr %p
- %nan = call half @llvm.amdgcn.cos.f16(half 0xH7E00) ; nan
+ %nan = call half @llvm.amdgcn.cos.f16(half f0x7E00) ; nan
store volatile half %nan, ptr %p
ret void
}
@@ -196,9 +196,9 @@ define void @test_f64(ptr %p) {
define void @test_f16_strictfp (ptr %p) #1 {
; CHECK-LABEL: @test_f16_strictfp(
-; CHECK-NEXT: [[P0:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xH0000) #1
+; CHECK-NEXT: [[P0:%.*]] = call half @llvm.amdgcn.cos.f16(half f0x0000) #1
; CHECK-NEXT: store volatile half [[P0]], ptr [[P:%.*]], align 2
-; CHECK-NEXT: [[P025:%.*]] = call half @llvm.amdgcn.cos.f16(half 0xH3400) #1
+; CHECK-NEXT: [[P025:%.*]] = call half @llvm.amdgcn.cos.f16(half f0x3400) #1
; CHECK-NEXT: store volatile half [[P025]], ptr [[P]], align 2
; CHECK-NEXT: ret void
;
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/fract.ll b/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/fract.ll
index 73fc897748f645..866f8eb642047f 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/fract.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/fract.ll
@@ -7,17 +7,17 @@ declare double @llvm.amdgcn.fract.f64(double)
define void @test_f16(ptr %p) {
; CHECK-LABEL: @test_f16(
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P:%.*]]
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH3400, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH3B00, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH0400, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH3BFF, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH7E00, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH7E00, ptr [[P]]
-; CHECK-NEXT: store volatile half 0xH7E00, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P:%.*]]
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x3400, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x3B00, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x0400, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x3BFF, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x7E00, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x7E00, ptr [[P]]
+; CHECK-NEXT: store volatile half f0x7E00, ptr [[P]]
; CHECK-NEXT: ret void
;
%p0 = call half @llvm.amdgcn.fract.f16(half +0.0)
@@ -32,15 +32,15 @@ define void @test_f16(ptr %p) {
store volatile half %p225, ptr %p
%n6125 = call half @llvm.amdgcn.fract.f16(half -6.125)
store volatile half %n6125, ptr %p
- %ptiny = call half @llvm.amdgcn.fract.f16(half 0xH0400) ; +min normal
+ %ptiny = call half @llvm.amdgcn.fract.f16(half f0x0400) ; +min normal
store volatile half %ptiny, ptr %p
- %ntiny = call half @llvm.amdgcn.fract.f16(half 0xH8400) ; -min normal
+ %ntiny = call half @llvm.amdgcn.fract.f16(half f0x8400) ; -min normal
store volatile half %ntiny, ptr %p
- %pinf = call half @llvm.amdgcn.fract.f16(half 0xH7C00) ; +inf
+ %pinf = call half @llvm.amdgcn.fract.f16(half f0x7C00) ; +inf
store volatile half %pinf, ptr %p
- %ninf = call half @llvm.amdgcn.fract.f16(half 0xHFC00) ; -inf
+ %ninf = call half @llvm.amdgcn.fract.f16(half f0xFC00) ; -inf
store volatile half %ninf, ptr %p
- %nan = call half @llvm.amdgcn.fract.f16(half 0xH7E00) ; nan
+ %nan = call half @llvm.amdgcn.fract.f16(half f0x7E00) ; nan
store volatile half %nan, ptr %p
ret void
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/sin.ll b/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/sin.ll
index 6aeecfff7c0310..1fe8f924f9029b 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/sin.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/AMDGPU/sin.ll
@@ -7,27 +7,27 @@ declare double @llvm.amdgcn.sin.f64(double) #0
define void @test_f16(ptr %p) {
; CHECK-LABEL: @test_f16(
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P:%.*]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH39A8, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xHB9A8, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH3C00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xHBC00, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: store volatile half 0xH0000, ptr [[P]], align 2
-; CHECK-NEXT: [[P1000:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xH63D0)
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P:%.*]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x39A8, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0xB9A8, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x3C00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0xBC00, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: store volatile half f0x0000, ptr [[P]], align 2
+; CHECK-NEXT: [[P1000:%.*]] = call half @llvm.amdgcn.sin.f16(half f0x63D0)
; CHECK-NEXT: store volatile half [[P1000]], ptr [[P]], align 2
-; CHECK-NEXT: [[N1000:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xHE3D0)
+; CHECK-NEXT: [[N1000:%.*]] = call half @llvm.amdgcn.sin.f16(half f0xE3D0)
; CHECK-NEXT: store volatile half [[N1000]], ptr [[P]], align 2
-; CHECK-NEXT: [[PINF:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xH7C00)
+; CHECK-NEXT: [[PINF:%.*]] = call half @llvm.amdgcn.sin.f16(half f0x7C00)
; CHECK-NEXT: store volatile half [[PINF]], ptr [[P]], align 2
-; CHECK-NEXT: [[NINF:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xHFC00)
+; CHECK-NEXT: [[NINF:%.*]] = call half @llvm.amdgcn.sin.f16(half f0xFC00)
; CHECK-NEXT: store volatile half [[NINF]], ptr [[P]], align 2
-; CHECK-NEXT: [[NAN:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xH7E00)
+; CHECK-NEXT: [[NAN:%.*]] = call half @llvm.amdgcn.sin.f16(half f0x7E00)
; CHECK-NEXT: store volatile half [[NAN]], ptr [[P]], align 2
; CHECK-NEXT: ret void
;
@@ -59,11 +59,11 @@ define void @test_f16(ptr %p) {
store volatile half %p1000, ptr %p
%n1000 = call half @llvm.amdgcn.sin.f16(half -1000.0)
store volatile half %n1000, ptr %p
- %pinf = call half @llvm.amdgcn.sin.f16(half 0xH7C00) ; +inf
+ %pinf = call half @llvm.amdgcn.sin.f16(half f0x7C00) ; +inf
store volatile half %pinf, ptr %p
- %ninf = call half @llvm.amdgcn.sin.f16(half 0xHFC00) ; -inf
+ %ninf = call half @llvm.amdgcn.sin.f16(half f0xFC00) ; -inf
store volatile half %ninf, ptr %p
- %nan = call half @llvm.amdgcn.sin.f16(half 0xH7E00) ; nan
+ %nan = call half @llvm.amdgcn.sin.f16(half f0x7E00) ; nan
store volatile half %nan, ptr %p
ret void
}
@@ -196,9 +196,9 @@ define void @test_f64(ptr %p) {
define void @test_f16_strictfp (ptr %p) #1 {
; CHECK-LABEL: @test_f16_strictfp(
-; CHECK-NEXT: [[P0:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xH0000) #1
+; CHECK-NEXT: [[P0:%.*]] = call half @llvm.amdgcn.sin.f16(half f0x0000) #1
; CHECK-NEXT: store volatile half [[P0]], ptr [[P:%.*]], align 2
-; CHECK-NEXT: [[P025:%.*]] = call half @llvm.amdgcn.sin.f16(half 0xH3400) #1
+; CHECK-NEXT: [[P025:%.*]] = call half @llvm.amdgcn.sin.f16(half f0x3400) #1
; CHECK-NEXT: store volatile half [[P025]], ptr [[P]], align 2
; CHECK-NEXT: ret void
;
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/cast.ll b/llvm/test/Transforms/InstSimplify/ConstProp/cast.ll
index b51f061d7918c3..2b61db0823afac 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/cast.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/cast.ll
@@ -57,7 +57,7 @@ define float @nan_f64_trunc() {
define <3 x half> @nan_v3f64_trunc() {
; CHECK-LABEL: @nan_v3f64_trunc(
-; CHECK-NEXT: ret <3 x half> splat (half 0xH7E00)
+; CHECK-NEXT: ret <3 x half> splat (half f0x7E00)
;
%f = fptrunc <3 x double> <double 0x7FF0020000000000, double 0x7FF003FFFFFFFFFF, double 0x7FF8000000000001> to <3 x half>
ret <3 x half> %f
@@ -65,7 +65,7 @@ define <3 x half> @nan_v3f64_trunc() {
define bfloat @nan_bf16_trunc() {
; CHECK-LABEL: @nan_bf16_trunc(
-; CHECK-NEXT: ret bfloat 0xR7FC0
+; CHECK-NEXT: ret bfloat f0x7FC0
;
%f = fptrunc double 0x7FF0000000000001 to bfloat
ret bfloat %f
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/convert-from-fp16.ll b/llvm/test/Transforms/InstSimplify/ConstProp/convert-from-fp16.ll
index 2c1b5f0a6a4cbd..873c254bc66839 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/convert-from-fp16.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/convert-from-fp16.ll
@@ -22,7 +22,7 @@ define double @fold_from_fp16_to_fp64() {
define x86_fp80 @fold_from_fp16_to_fp80() {
; CHECK-LABEL: @fold_from_fp16_to_fp80(
-; CHECK-NEXT: ret x86_fp80 0xK00000000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x00000000000000000000
;
%r = call x86_fp80 @llvm.convert.from.fp16.f80(i16 0)
ret x86_fp80 %r
@@ -30,7 +30,7 @@ define x86_fp80 @fold_from_fp16_to_fp80() {
define fp128 @fold_from_fp16_to_fp128() {
; CHECK-LABEL: @fold_from_fp16_to_fp128(
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000000
+; CHECK-NEXT: ret fp128 f0x00000000000000000000000000000000
;
%r = call fp128 @llvm.convert.from.fp16.f128(i16 0)
ret fp128 %r
@@ -38,7 +38,7 @@ define fp128 @fold_from_fp16_to_fp128() {
define ppc_fp128 @fold_from_fp16_to_ppcfp128() {
; CHECK-LABEL: @fold_from_fp16_to_ppcfp128(
-; CHECK-NEXT: ret ppc_fp128 0xM00000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000000000000000000000
;
%r = call ppc_fp128 @llvm.convert.from.fp16.ppcf128(i16 0)
ret ppc_fp128 %r
@@ -64,7 +64,7 @@ define double @fold_from_fp16_to_fp64_b() {
define x86_fp80 @fold_from_fp16_to_fp80_b() {
; CHECK-LABEL: @fold_from_fp16_to_fp80_b(
-; CHECK-NEXT: ret x86_fp80 0xK40018000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x40018000000000000000
;
%a = call i16 @llvm.convert.to.fp16.f64(double 4.0)
%r = call x86_fp80 @llvm.convert.from.fp16.f80(i16 %a)
@@ -73,7 +73,7 @@ define x86_fp80 @fold_from_fp16_to_fp80_b() {
define fp128 @fold_from_fp16_to_fp128_b() {
; CHECK-LABEL: @fold_from_fp16_to_fp128_b(
-; CHECK-NEXT: ret fp128 0xL00000000000000004001000000000000
+; CHECK-NEXT: ret fp128 f0x40010000000000000000000000000000
;
%a = call i16 @llvm.convert.to.fp16.f64(double 4.0)
%r = call fp128 @llvm.convert.from.fp16.f128(i16 %a)
@@ -82,7 +82,7 @@ define fp128 @fold_from_fp16_to_fp128_b() {
define ppc_fp128 @fold_from_fp16_to_ppcfp128_b() {
; CHECK-LABEL: @fold_from_fp16_to_ppcfp128_b(
-; CHECK-NEXT: ret ppc_fp128 0xM40100000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000004010000000000000
;
%a = call i16 @llvm.convert.to.fp16.f64(double 4.0)
%r = call ppc_fp128 @llvm.convert.from.fp16.ppcf128(i16 %a)
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/copysign.ll b/llvm/test/Transforms/InstSimplify/ConstProp/copysign.ll
index 051cb84fd0daff..6731c51d7a4c87 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/copysign.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/copysign.ll
@@ -57,7 +57,7 @@ define double @f64_03() {
define bfloat @bf16_01() {
; CHECK-LABEL: @bf16_01(
-; CHECK-NEXT: ret bfloat 0xRBF80
+; CHECK-NEXT: ret bfloat f0xBF80
;
%x = call bfloat @llvm.copysign.bf16(bfloat 1.0, bfloat -2.0)
ret bfloat %x
@@ -65,7 +65,7 @@ define bfloat @bf16_01() {
define bfloat @bf16_02() {
; CHECK-LABEL: @bf16_02(
-; CHECK-NEXT: ret bfloat 0xR4000
+; CHECK-NEXT: ret bfloat f0x4000
;
%x = call bfloat @llvm.copysign.bf16(bfloat -2.0, bfloat 1.0)
ret bfloat %x
@@ -73,7 +73,7 @@ define bfloat @bf16_02() {
define bfloat @bf16_03() {
; CHECK-LABEL: @bf16_03(
-; CHECK-NEXT: ret bfloat 0xRC000
+; CHECK-NEXT: ret bfloat f0xC000
;
%x = call bfloat @llvm.copysign.bf16(bfloat -2.0, bfloat -1.0)
ret bfloat %x
@@ -81,48 +81,48 @@ define bfloat @bf16_03() {
define fp128 @f128_01() {
; CHECK-LABEL: @f128_01(
-; CHECK-NEXT: ret fp128 0xL00000000000000008000000000000001
+; CHECK-NEXT: ret fp128 f0x80000000000000010000000000000000
;
- %x = call fp128 @llvm.copysign.f128(fp128 0xL00000000000000000000000000000001, fp128 0xL00000000000000008000000000000002)
+ %x = call fp128 @llvm.copysign.f128(fp128 f0x00000000000000010000000000000000, fp128 f0x80000000000000020000000000000000)
ret fp128 %x
}
define fp128 @f128_02() {
; CHECK-LABEL: @f128_02(
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000003
+; CHECK-NEXT: ret fp128 f0x00000000000000030000000000000000
;
- %x = call fp128 @llvm.copysign.f128(fp128 0xL00000000000000008000000000000003, fp128 0xL00000000000000000000000000000004)
+ %x = call fp128 @llvm.copysign.f128(fp128 f0x80000000000000030000000000000000, fp128 f0x00000000000000040000000000000000)
ret fp128 %x
}
define fp128 @f128_03() {
; CHECK-LABEL: @f128_03(
-; CHECK-NEXT: ret fp128 0xL00000000000000008000000000000005
+; CHECK-NEXT: ret fp128 f0x80000000000000050000000000000000
;
- %x = call fp128 @llvm.copysign.f128(fp128 0xL00000000000000008000000000000005, fp128 0xL00000000000000008000000000000006)
+ %x = call fp128 @llvm.copysign.f128(fp128 f0x80000000000000050000000000000000, fp128 f0x80000000000000060000000000000000)
ret fp128 %x
}
define ppc_fp128 @ppc128_01() {
; CHECK-LABEL: @ppc128_01(
-; CHECK-NEXT: ret ppc_fp128 0xM80000000000000008000000000000001
+; CHECK-NEXT: ret ppc_fp128 f0x80000000000000018000000000000000
;
- %x = call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 0xM00000000000000000000000000000001, ppc_fp128 0xM80000000000000000000000000000002)
+ %x = call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 f0x00000000000000010000000000000000, ppc_fp128 f0x00000000000000028000000000000000)
ret ppc_fp128 %x
}
define ppc_fp128 @ppc128_02() {
; CHECK-LABEL: @ppc128_02(
-; CHECK-NEXT: ret ppc_fp128 0xM00000000000000008000000000000003
+; CHECK-NEXT: ret ppc_fp128 f0x80000000000000030000000000000000
;
- %x = call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 0xM80000000000000000000000000000003, ppc_fp128 0xM00000000000000000000000000000004)
+ %x = call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 f0x00000000000000038000000000000000, ppc_fp128 f0x00000000000000040000000000000000)
ret ppc_fp128 %x
}
define ppc_fp128 @ppc128_03() {
; CHECK-LABEL: @ppc128_03(
-; CHECK-NEXT: ret ppc_fp128 0xM80000000000000000000000000000005
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000058000000000000000
;
- %x = call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 0xM80000000000000000000000000000005, ppc_fp128 0xM80000000000000000000000000000006)
+ %x = call ppc_fp128 @llvm.copysign.ppcf128(ppc_fp128 f0x00000000000000058000000000000000, ppc_fp128 f0x00000000000000068000000000000000)
ret ppc_fp128 %x
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/libfunc.ll b/llvm/test/Transforms/InstSimplify/ConstProp/libfunc.ll
index 348d90225a1a78..5aa5318f42817b 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/libfunc.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/libfunc.ll
@@ -7,9 +7,9 @@ declare double @sin(x86_fp80)
define double @PR50960(x86_fp80 %0) {
; CHECK-LABEL: @PR50960(
-; CHECK-NEXT: [[CALL:%.*]] = call double @sin(x86_fp80 0xK3FFF8000000000000000)
+; CHECK-NEXT: [[CALL:%.*]] = call double @sin(x86_fp80 f0x3FFF8000000000000000)
; CHECK-NEXT: ret double [[CALL]]
;
- %call = call double @sin(x86_fp80 0xK3FFF8000000000000000)
+ %call = call double @sin(x86_fp80 f0x3FFF8000000000000000)
ret double %call
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/loads.ll b/llvm/test/Transforms/InstSimplify/ConstProp/loads.ll
index dd75560e25ceda..8731d769e08e01 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/loads.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/loads.ll
@@ -113,10 +113,10 @@ define i128 @test_i128() {
define fp128 @test_fp128() {
; LE-LABEL: @test_fp128(
-; LE-NEXT: ret fp128 0xL000000000000007B0000000006B1BFF8
+; LE-NEXT: ret fp128 f0x0000000006B1BFF8000000000000007B
;
; BE-LABEL: @test_fp128(
-; BE-NEXT: ret fp128 0xL0000000006B1BFF8000000000000007B
+; BE-NEXT: ret fp128 f0x000000000000007B0000000006B1BFF8
;
%r = load fp128, ptr @g3
ret fp128 %r
@@ -135,10 +135,10 @@ define ppc_fp128 @test_ppc_fp128() {
define x86_fp80 @test_x86_fp80() {
; LE-LABEL: @test_x86_fp80(
-; LE-NEXT: ret x86_fp80 0xKFFFF000000000000007B
+; LE-NEXT: ret x86_fp80 f0xFFFF000000000000007B
;
; BE-LABEL: @test_x86_fp80(
-; BE-NEXT: ret x86_fp80 0xK000000000000007B0000
+; BE-NEXT: ret x86_fp80 f0x000000000000007B0000
;
%r = load x86_fp80, ptr @g3
ret x86_fp80 %r
@@ -146,10 +146,10 @@ define x86_fp80 @test_x86_fp80() {
define bfloat @test_bfloat() {
; LE-LABEL: @test_bfloat(
-; LE-NEXT: ret bfloat 0xR007B
+; LE-NEXT: ret bfloat f0x007B
;
; BE-LABEL: @test_bfloat(
-; BE-NEXT: ret bfloat 0xR0000
+; BE-NEXT: ret bfloat f0x0000
;
%r = load bfloat, ptr @g3
ret bfloat %r
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/logf128.ll b/llvm/test/Transforms/InstSimplify/ConstProp/logf128.ll
index 82db5e4066cb1b..f0d8c018986b89 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/logf128.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/logf128.ll
@@ -7,66 +7,66 @@ declare fp128 @logl(fp128)
define fp128 @log_e_64(){
; CHECK-LABEL: define fp128 @log_e_64() {
-; CHECK-NEXT: ret fp128 0xL300000000000000040010A2B23F3BAB7
+; CHECK-NEXT: ret fp128 f0x40010A2B23F3BAB73000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000004005000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x40050000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_smallest_positive_subnormal_number(){
; CHECK-LABEL: define fp128 @log_e_smallest_positive_subnormal_number() {
-; CHECK-NEXT: ret fp128 0xL3000000000000000C00C654628220780
+; CHECK-NEXT: ret fp128 f0xC00C6546282207803000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000010000000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x00000000000000000000000000000001)
ret fp128 %A
}
define fp128 @log_e_largest_subnormal_number(){
; CHECK-LABEL: define fp128 @log_e_largest_subnormal_number() {
-; CHECK-NEXT: ret fp128 0xLD000000000000000C00C62D918CE2421
+; CHECK-NEXT: ret fp128 f0xC00C62D918CE2421D000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xLFFFFFFFFFFFFFFFF0000FFFFFFFFFFFF)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x0000FFFFFFFFFFFFFFFFFFFFFFFFFFFF)
ret fp128 %A
}
define fp128 @log_e_smallest_positive_normal_number(){
;
; CHECK-LABEL: define fp128 @log_e_smallest_positive_normal_number() {
-; CHECK-NEXT: ret fp128 0xLD000000000000000C00C62D918CE2421
+; CHECK-NEXT: ret fp128 f0xC00C62D918CE2421D000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000000001000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x00010000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_largest_normal_number(){
; CHECK-LABEL: define fp128 @log_e_largest_normal_number() {
-; CHECK-NEXT: ret fp128 0xLF000000000000000400C62E42FEFA39E
+; CHECK-NEXT: ret fp128 f0x400C62E42FEFA39EF000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xLFFFFFFFFFFFFFFFF7FFEFFFFFFFFFFFF)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x7FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF)
ret fp128 %A
}
define fp128 @log_e_largest_number_less_than_one(){
; CHECK-LABEL: define fp128 @log_e_largest_number_less_than_one() {
-; CHECK-NEXT: ret fp128 0xL0000000000000000BF8E000000000000
+; CHECK-NEXT: ret fp128 f0xBF8E0000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xLFFFFFFFFFFFFFFFF3FFEFFFFFFFFFFFF)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x3FFEFFFFFFFFFFFFFFFFFFFFFFFFFFFF)
ret fp128 %A
}
define fp128 @log_e_1(){
; CHECK-LABEL: define fp128 @log_e_1() {
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000000
+; CHECK-NEXT: ret fp128 f0x00000000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000003FFF000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x3FFF0000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_smallest_number_larger_than_one(){
; CHECK-LABEL: define fp128 @log_e_smallest_number_larger_than_one() {
-; CHECK-NEXT: ret fp128 0xL00000000000000003F8F000000000000
+; CHECK-NEXT: ret fp128 f0x3F8F0000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000013FFF000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x3FFF0000000000000000000000000001)
ret fp128 %A
}
@@ -74,31 +74,31 @@ define fp128 @log_e_negative_2(){
; CHECK-LABEL: define fp128 @log_e_negative_2() {
-; CHECK-NEXT: ret fp128 0xL0000000000000000{{[7|F]}}FFF800000000000
+; CHECK-NEXT: ret fp128 f0x{{[7|F]}}FFF8000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL0000000000000000C000000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0xC0000000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_0(){
; CHECK-LABEL: define fp128 @log_e_0() {
-; CHECK-NEXT: ret fp128 0xL0000000000000000FFFF000000000000
+; CHECK-NEXT: ret fp128 f0xFFFF0000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000000000000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x00000000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_negative_0(){
; CHECK-LABEL: define fp128 @log_e_negative_0() {
-; CHECK-NEXT: ret fp128 0xL0000000000000000FFFF000000000000
+; CHECK-NEXT: ret fp128 f0xFFFF0000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000008000000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x80000000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_infinity(){
; CHECK-LABEL: define fp128 @log_e_infinity() {
-; CHECK-NEXT: ret fp128 0xL00000000000000007FFF000000000000
+; CHECK-NEXT: ret fp128 f0x7FFF0000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000007FFF000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x7FFF0000000000000000000000000000)
ret fp128 %A
}
@@ -106,15 +106,15 @@ define fp128 @log_e_negative_infinity(){
; CHECK-LABEL: define fp128 @log_e_negative_infinity() {
-; CHECK-NEXT: ret fp128 0xL0000000000000000{{[7|F]}}FFF800000000000
+; CHECK-NEXT: ret fp128 f0x{{[7|F]}}FFF8000000000000000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL0000000000000000FFFF000000000000)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0xFFFF0000000000000000000000000000)
ret fp128 %A
}
define fp128 @log_e_nan(){
; CHECK-LABEL: define fp128 @log_e_nan() {
-; CHECK-NEXT: ret fp128 0xL00000000000000007FFF800000000001
+; CHECK-NEXT: ret fp128 f0x7FFF8000000000010000000000000000
;
- %A = call fp128 @llvm.log.f128(fp128 noundef 0xL00000000000000007FFF000000000001)
+ %A = call fp128 @llvm.log.f128(fp128 noundef f0x7FFF0000000000010000000000000000)
ret fp128 %A
}
@@ -122,52 +122,52 @@ define <2 x fp128> @log_e_negative_2_vector(){
; CHECK-LABEL: define <2 x fp128> @log_e_negative_2_vector() {
-; CHECK-NEXT: ret <2 x fp128> <fp128 0xL0000000000000000{{[7|F]}}FFF800000000000, fp128 0xL0000000000000000{{[7|F]}}FFF800000000000>
+; CHECK-NEXT: ret <2 x fp128> <fp128 f0x{{[7|F]}}FFF8000000000000000000000000000, fp128 f0x{{[7|F]}}FFF8000000000000000000000000000>
;
- %A = call <2 x fp128> @llvm.log.v2f128(<2 x fp128> <fp128 0xL0000000000000000C000000000000000, fp128 0xL0000000000000000C000000000000001>)
+ %A = call <2 x fp128> @llvm.log.v2f128(<2 x fp128> <fp128 f0xC0000000000000000000000000000000, fp128 f0xC0000000000000010000000000000000>)
ret <2 x fp128> %A
}
define fp128 @logl_e_64(){
; CHECK-LABEL: define fp128 @logl_e_64() {
-; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef 0xL00000000000000004005000000000000)
-; CHECK-NEXT: ret fp128 0xL300000000000000040010A2B23F3BAB7
+; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef f0x40050000000000000000000000000000)
+; CHECK-NEXT: ret fp128 f0x40010A2B23F3BAB73000000000000000
;
- %A = call fp128 @logl(fp128 noundef 0xL00000000000000004005000000000000)
+ %A = call fp128 @logl(fp128 noundef f0x40050000000000000000000000000000)
ret fp128 %A
}
define fp128 @logl_e_0(){
; CHECK-LABEL: define fp128 @logl_e_0() {
-; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef 0xL00000000000000000000000000000000)
+; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef f0x00000000000000000000000000000000)
; CHECK-NEXT: ret fp128 [[A]]
;
- %A = call fp128 @logl(fp128 noundef 0xL00000000000000000000000000000000)
+ %A = call fp128 @logl(fp128 noundef f0x00000000000000000000000000000000)
ret fp128 %A
}
define fp128 @logl_e_infinity(){
; CHECK-LABEL: define fp128 @logl_e_infinity() {
-; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef 0xL00000000000000007FFF000000000000)
-; CHECK-NEXT: ret fp128 0xL00000000000000007FFF000000000000
+; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef f0x7FFF0000000000000000000000000000)
+; CHECK-NEXT: ret fp128 f0x7FFF0000000000000000000000000000
;
- %A = call fp128 @logl(fp128 noundef 0xL00000000000000007FFF000000000000)
+ %A = call fp128 @logl(fp128 noundef f0x7FFF0000000000000000000000000000)
ret fp128 %A
}
define fp128 @logl_e_nan(){
; CHECK-LABEL: define fp128 @logl_e_nan() {
-; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef 0xL00000000000000007FFF000000000001)
+; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef f0x7FFF0000000000010000000000000000)
; CHECK-NEXT: ret fp128 [[A]]
;
- %A = call fp128 @logl(fp128 noundef 0xL00000000000000007FFF000000000001)
+ %A = call fp128 @logl(fp128 noundef f0x7FFF0000000000010000000000000000)
ret fp128 %A
}
define fp128 @logl_e_negative_2(){
; CHECK-LABEL: define fp128 @logl_e_negative_2() {
-; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef 0xL0000000000000000C000000000000000)
+; CHECK-NEXT: [[A:%.*]] = call fp128 @logl(fp128 noundef f0xC0000000000000000000000000000000)
; CHECK-NEXT: ret fp128 [[A]]
;
- %A = call fp128 @logl(fp128 noundef 0xL0000000000000000C000000000000000)
+ %A = call fp128 @logl(fp128 noundef f0xC0000000000000000000000000000000)
ret fp128 %A
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/min-max.ll b/llvm/test/Transforms/InstSimplify/ConstProp/min-max.ll
index 9120649eb5c4f1..d36d12343cda2c 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/min-max.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/min-max.ll
@@ -83,7 +83,7 @@ define float @minnum_float_qnan_p0() {
define bfloat @minnum_bfloat() {
; CHECK-LABEL: @minnum_bfloat(
-; CHECK-NEXT: ret bfloat 0xR40A0
+; CHECK-NEXT: ret bfloat f0x40A0
;
%1 = call bfloat @llvm.minnum.bf16(bfloat 5.0, bfloat 42.0)
ret bfloat %1
@@ -91,7 +91,7 @@ define bfloat @minnum_bfloat() {
define half @minnum_half() {
; CHECK-LABEL: @minnum_half(
-; CHECK-NEXT: ret half 0xH4500
+; CHECK-NEXT: ret half f0x4500
;
%1 = call half @llvm.minnum.f16(half 5.0, half 42.0)
ret half %1
@@ -109,7 +109,7 @@ define <4 x float> @minnum_float_vec() {
define <4 x bfloat> @minnum_bfloat_vec() {
; CHECK-LABEL: @minnum_bfloat_vec(
-; CHECK-NEXT: ret <4 x bfloat> <bfloat 0xR7FC0, bfloat 0xR40A0, bfloat 0xR4228, bfloat 0xR40A0>
+; CHECK-NEXT: ret <4 x bfloat> <bfloat f0x7FC0, bfloat f0x40A0, bfloat f0x4228, bfloat f0x40A0>
;
%1 = call <4 x bfloat> @llvm.minnum.v4bf16(<4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 0x7FF8000000000000, bfloat 42., bfloat 42.>, <4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 5., bfloat 0x7FF8000000000000, bfloat 5.>)
ret <4 x bfloat> %1
@@ -117,7 +117,7 @@ define <4 x bfloat> @minnum_bfloat_vec() {
define <4 x half> @minnum_half_vec() {
; CHECK-LABEL: @minnum_half_vec(
-; CHECK-NEXT: ret <4 x half> <half 0xH7E00, half 0xH4500, half 0xH5140, half 0xH4500>
+; CHECK-NEXT: ret <4 x half> <half f0x7E00, half f0x4500, half f0x5140, half f0x4500>
;
%1 = call <4 x half> @llvm.minnum.v4f16(<4 x half> <half 0x7FF8000000000000, half 0x7FF8000000000000, half 42., half 42.>, <4 x half> <half 0x7FF8000000000000, half 5., half 0x7FF8000000000000, half 5.>)
ret <4 x half> %1
@@ -175,7 +175,7 @@ define float @maxnum_float_qnan_p0() {
define bfloat @maxnum_bfloat() {
; CHECK-LABEL: @maxnum_bfloat(
-; CHECK-NEXT: ret bfloat 0xR4228
+; CHECK-NEXT: ret bfloat f0x4228
;
%1 = call bfloat @llvm.maxnum.bf16(bfloat 5.0, bfloat 42.0)
ret bfloat %1
@@ -183,7 +183,7 @@ define bfloat @maxnum_bfloat() {
define half @maxnum_half() {
; CHECK-LABEL: @maxnum_half(
-; CHECK-NEXT: ret half 0xH5140
+; CHECK-NEXT: ret half f0x5140
;
%1 = call half @llvm.maxnum.f16(half 5.0, half 42.0)
ret half %1
@@ -201,7 +201,7 @@ define <4 x float> @maxnum_float_vec() {
define <4 x bfloat> @maxnum_bfloat_vec() {
; CHECK-LABEL: @maxnum_bfloat_vec(
-; CHECK-NEXT: ret <4 x bfloat> <bfloat 0xR7FC0, bfloat 0xR40A0, bfloat 0xR4228, bfloat 0xR4228>
+; CHECK-NEXT: ret <4 x bfloat> <bfloat f0x7FC0, bfloat f0x40A0, bfloat f0x4228, bfloat f0x4228>
;
%1 = call <4 x bfloat> @llvm.maxnum.v4bf16(<4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 0x7FF8000000000000, bfloat 42., bfloat 42.>, <4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 5., bfloat 0x7FF8000000000000, bfloat 5.>)
ret <4 x bfloat> %1
@@ -209,7 +209,7 @@ define <4 x bfloat> @maxnum_bfloat_vec() {
define <4 x half> @maxnum_half_vec() {
; CHECK-LABEL: @maxnum_half_vec(
-; CHECK-NEXT: ret <4 x half> <half 0xH7E00, half 0xH4500, half 0xH5140, half 0xH5140>
+; CHECK-NEXT: ret <4 x half> <half f0x7E00, half f0x4500, half f0x5140, half f0x5140>
;
%1 = call <4 x half> @llvm.maxnum.v4f16(<4 x half> <half 0x7FF8000000000000, half 0x7FF8000000000000, half 42., half 42.>, <4 x half> <half 0x7FF8000000000000, half 5., half 0x7FF8000000000000, half 5.>)
ret <4 x half> %1
@@ -235,7 +235,7 @@ define float @minimum_float() {
define bfloat @minimum_bfloat() {
; CHECK-LABEL: @minimum_bfloat(
-; CHECK-NEXT: ret bfloat 0xR40A0
+; CHECK-NEXT: ret bfloat f0x40A0
;
%1 = call bfloat @llvm.minimum.bf16(bfloat 5.0, bfloat 42.0)
ret bfloat %1
@@ -243,7 +243,7 @@ define bfloat @minimum_bfloat() {
define half @minimum_half() {
; CHECK-LABEL: @minimum_half(
-; CHECK-NEXT: ret half 0xH4500
+; CHECK-NEXT: ret half f0x4500
;
%1 = call half @llvm.minimum.f16(half 5.0, half 42.0)
ret half %1
@@ -261,7 +261,7 @@ define <4 x float> @minimum_float_vec() {
define <4 x bfloat> @minimum_bfloat_vec() {
; CHECK-LABEL: @minimum_bfloat_vec(
-; CHECK-NEXT: ret <4 x bfloat> <bfloat 0xR7FC0, bfloat 0xR7FC0, bfloat 0xR7FC0, bfloat 0xR40A0>
+; CHECK-NEXT: ret <4 x bfloat> <bfloat f0x7FC0, bfloat f0x7FC0, bfloat f0x7FC0, bfloat f0x40A0>
;
%1 = call <4 x bfloat> @llvm.minimum.v4bf16(<4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 0x7FF8000000000000, bfloat 42., bfloat 42.>, <4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 5., bfloat 0x7FF8000000000000, bfloat 5.>)
ret <4 x bfloat> %1
@@ -269,7 +269,7 @@ define <4 x bfloat> @minimum_bfloat_vec() {
define <4 x half> @minimum_half_vec() {
; CHECK-LABEL: @minimum_half_vec(
-; CHECK-NEXT: ret <4 x half> <half 0xH7E00, half 0xH7E00, half 0xH7E00, half 0xH4500>
+; CHECK-NEXT: ret <4 x half> <half f0x7E00, half f0x7E00, half f0x7E00, half f0x4500>
;
%1 = call <4 x half> @llvm.minimum.v4f16(<4 x half> <half 0x7FF8000000000000, half 0x7FF8000000000000, half 42., half 42.>, <4 x half> <half 0x7FF8000000000000, half 5., half 0x7FF8000000000000, half 5.>)
ret <4 x half> %1
@@ -295,7 +295,7 @@ define float @maximum_float() {
define bfloat @maximum_bfloat() {
; CHECK-LABEL: @maximum_bfloat(
-; CHECK-NEXT: ret bfloat 0xR4228
+; CHECK-NEXT: ret bfloat f0x4228
;
%1 = call bfloat @llvm.maximum.bf16(bfloat 5.0, bfloat 42.0)
ret bfloat %1
@@ -303,7 +303,7 @@ define bfloat @maximum_bfloat() {
define half @maximum_half() {
; CHECK-LABEL: @maximum_half(
-; CHECK-NEXT: ret half 0xH5140
+; CHECK-NEXT: ret half f0x5140
;
%1 = call half @llvm.maximum.f16(half 5.0, half 42.0)
ret half %1
@@ -321,7 +321,7 @@ define <4 x float> @maximum_float_vec() {
define <4 x bfloat> @maximum_bfloat_vec() {
; CHECK-LABEL: @maximum_bfloat_vec(
-; CHECK-NEXT: ret <4 x bfloat> <bfloat 0xR7FC0, bfloat 0xR7FC0, bfloat 0xR7FC0, bfloat 0xR4228>
+; CHECK-NEXT: ret <4 x bfloat> <bfloat f0x7FC0, bfloat f0x7FC0, bfloat f0x7FC0, bfloat f0x4228>
;
%1 = call <4 x bfloat> @llvm.maximum.v4bf16(<4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 0x7FF8000000000000, bfloat 42., bfloat 42.>, <4 x bfloat> <bfloat 0x7FF8000000000000, bfloat 5., bfloat 0x7FF8000000000000, bfloat 5.>)
ret <4 x bfloat> %1
@@ -329,7 +329,7 @@ define <4 x bfloat> @maximum_bfloat_vec() {
define <4 x half> @maximum_half_vec() {
; CHECK-LABEL: @maximum_half_vec(
-; CHECK-NEXT: ret <4 x half> <half 0xH7E00, half 0xH7E00, half 0xH7E00, half 0xH5140>
+; CHECK-NEXT: ret <4 x half> <half f0x7E00, half f0x7E00, half f0x7E00, half f0x5140>
;
%1 = call <4 x half> @llvm.maximum.v4f16(<4 x half> <half 0x7FF8000000000000, half 0x7FF8000000000000, half 42., half 42.>, <4 x half> <half 0x7FF8000000000000, half 5., half 0x7FF8000000000000, half 5.>)
ret <4 x half> %1
diff --git a/llvm/test/Transforms/InstSimplify/bitcast-vector-fold.ll b/llvm/test/Transforms/InstSimplify/bitcast-vector-fold.ll
index d2656e291547cf..fb8c0c59c5f7ee 100644
--- a/llvm/test/Transforms/InstSimplify/bitcast-vector-fold.ll
+++ b/llvm/test/Transforms/InstSimplify/bitcast-vector-fold.ll
@@ -55,7 +55,7 @@ define i32 @test7() {
; CHECK-LABEL: @test7(
; CHECK-NEXT: ret i32 1118464
;
- %tmp3 = bitcast <2 x half> <half 0xH1100, half 0xH0011> to i32
+ %tmp3 = bitcast <2 x half> <half f0x1100, half f0x0011> to i32
ret i32 %tmp3
}
diff --git a/llvm/test/Transforms/InstSimplify/canonicalize.ll b/llvm/test/Transforms/InstSimplify/canonicalize.ll
index 9d2bdd1b853e61..2ea8f391051ff0 100644
--- a/llvm/test/Transforms/InstSimplify/canonicalize.ll
+++ b/llvm/test/Transforms/InstSimplify/canonicalize.ll
@@ -397,7 +397,7 @@ define double @canonicalize_ninf_f64() {
define half @canonicalize_zero_f16() {
; CHECK-LABEL: @canonicalize_zero_f16(
-; CHECK-NEXT: ret half 0xH0000
+; CHECK-NEXT: ret half f0x0000
;
%ret = call half @llvm.canonicalize.f16(half 0.0)
ret half %ret
@@ -405,7 +405,7 @@ define half @canonicalize_zero_f16() {
define half @canonicalize_1.0_f16() {
; CHECK-LABEL: @canonicalize_1.0_f16(
-; CHECK-NEXT: ret half 0xH3C00
+; CHECK-NEXT: ret half f0x3C00
;
%ret = call half @llvm.canonicalize.f16(half 1.0)
ret half %ret
@@ -413,25 +413,25 @@ define half @canonicalize_1.0_f16() {
define half @canonicalize_0x0001_f16() {
; CHECK-LABEL: @canonicalize_0x0001_f16(
-; CHECK-NEXT: ret half 0xH0001
+; CHECK-NEXT: ret half f0x0001
;
- %ret = call half @llvm.canonicalize.f16(half 0xH0001)
+ %ret = call half @llvm.canonicalize.f16(half f0x0001)
ret half %ret
}
define half @canonicalize_inf_f16() {
; CHECK-LABEL: @canonicalize_inf_f16(
-; CHECK-NEXT: ret half 0xH7C00
+; CHECK-NEXT: ret half f0x7C00
;
- %ret = call half @llvm.canonicalize.f16(half 0xH7C00)
+ %ret = call half @llvm.canonicalize.f16(half f0x7C00)
ret half %ret
}
define half @canonicalize_neg_inf_f16() {
; CHECK-LABEL: @canonicalize_neg_inf_f16(
-; CHECK-NEXT: ret half 0xHFC00
+; CHECK-NEXT: ret half f0xFC00
;
- %ret = call half @llvm.canonicalize.f16(half 0xHFC00)
+ %ret = call half @llvm.canonicalize.f16(half f0xFC00)
ret half %ret
}
@@ -441,50 +441,50 @@ define half @canonicalize_neg_inf_f16() {
define fp128 @canonicalize_zero_fp128() {
; CHECK-LABEL: @canonicalize_zero_fp128(
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000000
+; CHECK-NEXT: ret fp128 f0x00000000000000000000000000000000
;
- %ret = call fp128 @llvm.canonicalize.fp128(fp128 0xL00000000000000000000000000000000)
+ %ret = call fp128 @llvm.canonicalize.fp128(fp128 f0x00000000000000000000000000000000)
ret fp128 %ret
}
define fp128 @canonicalize_1.0_fp128() {
; CHECK-LABEL: @canonicalize_1.0_fp128(
-; CHECK-NEXT: ret fp128 0xL00000000000000003FFF000000000000
+; CHECK-NEXT: ret fp128 f0x3FFF0000000000000000000000000000
;
- %ret = call fp128 @llvm.canonicalize.fp128(fp128 0xL00000000000000003FFF000000000000)
+ %ret = call fp128 @llvm.canonicalize.fp128(fp128 f0x3FFF0000000000000000000000000000)
ret fp128 %ret
}
define fp128 @canonicalize_0x00000000000000000000000000000001_fp128() {
; CHECK-LABEL: @canonicalize_0x00000000000000000000000000000001_fp128(
-; CHECK-NEXT: ret fp128 0xL00000000000000000000000000000001
+; CHECK-NEXT: ret fp128 f0x00000000000000010000000000000000
;
- %ret = call fp128 @llvm.canonicalize.fp128(fp128 0xL00000000000000000000000000000001)
+ %ret = call fp128 @llvm.canonicalize.fp128(fp128 f0x00000000000000010000000000000000)
ret fp128 %ret
}
define fp128 @canonicalize_inf_fp128() {
; CHECK-LABEL: @canonicalize_inf_fp128(
-; CHECK-NEXT: ret fp128 0xL00000000000000007FFF000000000000
+; CHECK-NEXT: ret fp128 f0x7FFF0000000000000000000000000000
;
- %ret = call fp128 @llvm.canonicalize.fp128(fp128 0xL00000000000000007FFF000000000000)
+ %ret = call fp128 @llvm.canonicalize.fp128(fp128 f0x7FFF0000000000000000000000000000)
ret fp128 %ret
}
define fp128 @canonicalize_neg_inf_fp128() {
; CHECK-LABEL: @canonicalize_neg_inf_fp128(
-; CHECK-NEXT: ret fp128 0xL0000000000000000FFFF000000000000
+; CHECK-NEXT: ret fp128 f0xFFFF0000000000000000000000000000
;
- %ret = call fp128 @llvm.canonicalize.fp128(fp128 0xL0000000000000000FFFF000000000000)
+ %ret = call fp128 @llvm.canonicalize.fp128(fp128 f0xFFFF0000000000000000000000000000)
ret fp128 %ret
}
define fp128 @canonicalize_nan_fp128() {
; CHECK-LABEL: @canonicalize_nan_fp128(
-; CHECK-NEXT: [[RET:%.*]] = call fp128 @llvm.canonicalize.f128(fp128 0xL00000000000000007FFF800000000000)
+; CHECK-NEXT: [[RET:%.*]] = call fp128 @llvm.canonicalize.f128(fp128 f0x7FFF8000000000000000000000000000)
; CHECK-NEXT: ret fp128 [[RET]]
;
- %ret = call fp128 @llvm.canonicalize.fp128(fp128 0xL00000000000000007FFF800000000000)
+ %ret = call fp128 @llvm.canonicalize.fp128(fp128 f0x7FFF8000000000000000000000000000)
ret fp128 %ret
}
@@ -494,7 +494,7 @@ define fp128 @canonicalize_nan_fp128() {
define bfloat @canonicalize_zero_bf16() {
; CHECK-LABEL: @canonicalize_zero_bf16(
-; CHECK-NEXT: ret bfloat 0xR0000
+; CHECK-NEXT: ret bfloat f0x0000
;
%ret = call bfloat @llvm.canonicalize.bf16(bfloat 0.0)
ret bfloat %ret
@@ -502,7 +502,7 @@ define bfloat @canonicalize_zero_bf16() {
define bfloat @canonicalize_1.0_bf16() {
; CHECK-LABEL: @canonicalize_1.0_bf16(
-; CHECK-NEXT: ret bfloat 0xR3F80
+; CHECK-NEXT: ret bfloat f0x3F80
;
%ret = call bfloat @llvm.canonicalize.bf16(bfloat 1.0)
ret bfloat %ret
@@ -510,42 +510,42 @@ define bfloat @canonicalize_1.0_bf16() {
define bfloat @canonicalize_0x0001_bf16() {
; CHECK-LABEL: @canonicalize_0x0001_bf16(
-; CHECK-NEXT: ret bfloat 0xR0001
+; CHECK-NEXT: ret bfloat f0x0001
;
- %ret = call bfloat @llvm.canonicalize.bf16(bfloat 0xR0001)
+ %ret = call bfloat @llvm.canonicalize.bf16(bfloat f0x0001)
ret bfloat %ret
}
define bfloat @canonicalize_inf_bf16() {
; CHECK-LABEL: @canonicalize_inf_bf16(
-; CHECK-NEXT: ret bfloat 0xR7F80
+; CHECK-NEXT: ret bfloat f0x7F80
;
- %ret = call bfloat @llvm.canonicalize.bf16(bfloat 0xR7F80)
+ %ret = call bfloat @llvm.canonicalize.bf16(bfloat f0x7F80)
ret bfloat %ret
}
define bfloat @canonicalize_neg_inf_bf16() {
; CHECK-LABEL: @canonicalize_neg_inf_bf16(
-; CHECK-NEXT: ret bfloat 0xRFF80
+; CHECK-NEXT: ret bfloat f0xFF80
;
- %ret = call bfloat @llvm.canonicalize.bf16(bfloat 0xRFF80)
+ %ret = call bfloat @llvm.canonicalize.bf16(bfloat f0xFF80)
ret bfloat %ret
}
define bfloat @canonicalize_nan_bf16() {
; CHECK-LABEL: @canonicalize_nan_bf16(
-; CHECK-NEXT: [[RET:%.*]] = call bfloat @llvm.canonicalize.bf16(bfloat 0xR7FC0)
+; CHECK-NEXT: [[RET:%.*]] = call bfloat @llvm.canonicalize.bf16(bfloat f0x7FC0)
; CHECK-NEXT: ret bfloat [[RET]]
;
- %ret = call bfloat @llvm.canonicalize.bf16(bfloat 0xR7FC0)
+ %ret = call bfloat @llvm.canonicalize.bf16(bfloat f0x7FC0)
ret bfloat %ret
}
define bfloat @canonicalize_0xff_bf16() {
; CHECK-LABEL: @canonicalize_0xff_bf16(
-; CHECK-NEXT: ret bfloat 0xR00FF
+; CHECK-NEXT: ret bfloat f0x00FF
;
- %ret = call bfloat @llvm.canonicalize.bf16(bfloat 0xR00FF)
+ %ret = call bfloat @llvm.canonicalize.bf16(bfloat f0x00FF)
ret bfloat %ret
}
@@ -563,7 +563,7 @@ define x86_fp80 @canonicalize_poison_f80() {
define x86_fp80 @canonicalize_undef_f80() {
; CHECK-LABEL: @canonicalize_undef_f80(
-; CHECK-NEXT: ret x86_fp80 0xK00000000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x00000000000000000000
;
%ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 undef)
ret x86_fp80 %ret
@@ -571,80 +571,80 @@ define x86_fp80 @canonicalize_undef_f80() {
define x86_fp80 @canonicalize_zero_f80() {
; CHECK-LABEL: @canonicalize_zero_f80(
-; CHECK-NEXT: ret x86_fp80 0xK00000000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x00000000000000000000
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK00000000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x00000000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_negzero_f80() {
; CHECK-LABEL: @canonicalize_negzero_f80(
-; CHECK-NEXT: ret x86_fp80 0xK80000000000000000000
+; CHECK-NEXT: ret x86_fp80 f0x80000000000000000000
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK80000000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x80000000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_inf_f80() {
; CHECK-LABEL: @canonicalize_inf_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK7FFF8000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x7FFF8000000000000000)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK7FFF8000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x7FFF8000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_ninf_f80() {
; CHECK-LABEL: @canonicalize_ninf_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKFFFF8000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xFFFF8000000000000000)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKFFFF8000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xFFFF8000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_qnan_f80() {
; CHECK-LABEL: @canonicalize_qnan_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKFFFFC000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xFFFFC000000000000000)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKFFFFC000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xFFFFC000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_snan_f80() {
; CHECK-LABEL: @canonicalize_snan_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKFFFFE000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xFFFFE000000000000000)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKFFFFE000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xFFFFE000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_1.0_f80() {
; CHECK-LABEL: @canonicalize_1.0_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK3FFF8000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x3FFF8000000000000000)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK3FFF8000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x3FFF8000000000000000)
ret x86_fp80 %ret
}
define x86_fp80 @canonicalize_neg1.0_f80() {
; CHECK-LABEL: @canonicalize_neg1.0_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKBFFF8000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xBFFF8000000000000000)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xKBFFF8000000000000000)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0xBFFF8000000000000000)
ret x86_fp80 %ret
}
-define x86_fp80 @canonicalize_0xK00000000000000000001_f80() {
-; CHECK-LABEL: @canonicalize_0xK00000000000000000001_f80(
-; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK00000000000000000001)
+define x86_fp80 @canonicalize_f0x00000000000000000001_f80() {
+; CHECK-LABEL: @canonicalize_f0x00000000000000000001_f80(
+; CHECK-NEXT: [[RET:%.*]] = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x00000000000000000001)
; CHECK-NEXT: ret x86_fp80 [[RET]]
;
- %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 0xK00000000000000000001)
+ %ret = call x86_fp80 @llvm.canonicalize.f80(x86_fp80 f0x00000000000000000001)
ret x86_fp80 %ret
}
@@ -662,7 +662,7 @@ define ppc_fp128 @canonicalize_poison_ppcf128() {
define ppc_fp128 @canonicalize_undef_ppcf128() {
; CHECK-LABEL: @canonicalize_undef_ppcf128(
-; CHECK-NEXT: ret ppc_fp128 0xM00000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000000000000000000000
;
%ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 undef)
ret ppc_fp128 %ret
@@ -670,95 +670,95 @@ define ppc_fp128 @canonicalize_undef_ppcf128() {
define ppc_fp128 @canonicalize_zero_ppcf128() {
; CHECK-LABEL: @canonicalize_zero_ppcf128(
-; CHECK-NEXT: ret ppc_fp128 0xM00000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000000000000000000000
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM00000000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000000000000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_negzero_ppcf128() {
; CHECK-LABEL: @canonicalize_negzero_ppcf128(
-; CHECK-NEXT: ret ppc_fp128 0xM80000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000008000000000000000
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM80000000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000008000000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_noncanonical_zero_0_ppcf128() {
; CHECK-LABEL: @canonicalize_noncanonical_zero_0_ppcf128(
-; CHECK-NEXT: ret ppc_fp128 0xM00000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000000000000000000000
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM0000000000000000ffffffffffffffff)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0xffffffffffffffff0000000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_noncanonical_zero_1_ppcf128() {
; CHECK-LABEL: @canonicalize_noncanonical_zero_1_ppcf128(
-; CHECK-NEXT: ret ppc_fp128 0xM00000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000000000000000000000
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM00000000000000000000000000000001)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000010000000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_noncanonical_negzero_0_ppcf128() {
; CHECK-LABEL: @canonicalize_noncanonical_negzero_0_ppcf128(
-; CHECK-NEXT: ret ppc_fp128 0xM80000000000000000000000000000000
+; CHECK-NEXT: ret ppc_fp128 f0x00000000000000008000000000000000
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM8000000000000000ffffffffffffffff)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0xffffffffffffffff8000000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_inf_ppcf128() {
; CHECK-LABEL: @canonicalize_inf_ppcf128(
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM7FF00000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000007FF0000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM7FF00000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000007FF0000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_neginf_ppcf128() {
; CHECK-LABEL: @canonicalize_neginf_ppcf128(
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMFFF00000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000FFF0000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMFFF00000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000FFF0000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_qnan_ppcf128() {
; CHECK-LABEL: @canonicalize_qnan_ppcf128(
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMFFF80000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000FFF8000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMFFF80000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000FFF8000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_snan_ppcf128() {
; CHECK-LABEL: @canonicalize_snan_ppcf128(
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMFFFC0000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000FFFC000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMFFFC0000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000FFFC000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_1.0_ppcf128() {
; CHECK-LABEL: @canonicalize_1.0_ppcf128(
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM3FF00000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000003FF0000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xM3FF00000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x00000000000000003FF0000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_neg1.0_ppcf128() {
; CHECK-LABEL: @canonicalize_neg1.0_ppcf128(
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMBFF00000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000BFF0000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 0xMBFF00000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.canonicalize.ppcf128(ppc_fp128 f0x0000000000000000BFF0000000000000)
ret ppc_fp128 %ret
}
diff --git a/llvm/test/Transforms/InstSimplify/constfold-constrained.ll b/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
index a9ef7f6a765d19..44895a8297f923 100644
--- a/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
+++ b/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
@@ -339,7 +339,7 @@ entry:
define half @fadd_10() #0 {
; CHECK-LABEL: @fadd_10(
; CHECK-NEXT: entry:
-; CHECK-NEXT: ret half 0xH4200
+; CHECK-NEXT: ret half f0x4200
;
entry:
%result = call half @llvm.experimental.constrained.fadd.f16(half 1.0, half 2.0, metadata !"round.tonearest", metadata !"fpexcept.ignore") #0
@@ -349,7 +349,7 @@ entry:
define bfloat @fadd_11() #0 {
; CHECK-LABEL: @fadd_11(
; CHECK-NEXT: entry:
-; CHECK-NEXT: ret bfloat 0xR4040
+; CHECK-NEXT: ret bfloat f0x4040
;
entry:
%result = call bfloat @llvm.experimental.constrained.fadd.bf16(bfloat 1.0, bfloat 2.0, metadata !"round.tonearest", metadata !"fpexcept.ignore") #0
diff --git a/llvm/test/Transforms/InstSimplify/exp10.ll b/llvm/test/Transforms/InstSimplify/exp10.ll
index a546bb1255d854..af42926d3dc5b4 100644
--- a/llvm/test/Transforms/InstSimplify/exp10.ll
+++ b/llvm/test/Transforms/InstSimplify/exp10.ll
@@ -235,28 +235,28 @@ define float @exp10_neg_denorm() {
define ppc_fp128 @exp10_one_ppcf128() {
; CHECK-LABEL: define ppc_fp128 @exp10_one_ppcf128() {
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 0xM3FF00000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 f0x00000000000000003FF0000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 0xM3FF00000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 f0x00000000000000003FF0000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @exp10_negone_ppcf128() {
; CHECK-LABEL: define ppc_fp128 @exp10_negone_ppcf128() {
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 0xMBFF00000000000000000000000000000)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 f0x0000000000000000BFF0000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 0xMBFF00000000000000000000000000000)
+ %ret = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 f0x0000000000000000BFF0000000000000)
ret ppc_fp128 %ret
}
define ppc_fp128 @canonicalize_noncanonical_zero_1_ppcf128() {
; CHECK-LABEL: define ppc_fp128 @canonicalize_noncanonical_zero_1_ppcf128() {
-; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 0xM00000000000000000000000000000001)
+; CHECK-NEXT: [[RET:%.*]] = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 f0x00000000000000010000000000000000)
; CHECK-NEXT: ret ppc_fp128 [[RET]]
;
- %ret = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 0xM00000000000000000000000000000001)
+ %ret = call ppc_fp128 @llvm.exp10.ppcf128(ppc_fp128 f0x00000000000000010000000000000000)
ret ppc_fp128 %ret
}
diff --git a/llvm/test/Transforms/InstSimplify/floating-point-arithmetic.ll b/llvm/test/Transforms/InstSimplify/floating-point-arithmetic.ll
index d3178a103d42cd..cf85d3c51e33a0 100644
--- a/llvm/test/Transforms/InstSimplify/floating-point-arithmetic.ll
+++ b/llvm/test/Transforms/InstSimplify/floating-point-arithmetic.ll
@@ -172,7 +172,7 @@ define double @fmul_X_1(double %a) {
define half @fmul_nnan_ninf_nneg_0.0(i15 %x) {
; CHECK-LABEL: @fmul_nnan_ninf_nneg_0.0(
-; CHECK-NEXT: ret half 0xH0000
+; CHECK-NEXT: ret half f0x0000
;
%f = uitofp i15 %x to half
%r = fmul half %f, 0.0
@@ -181,7 +181,7 @@ define half @fmul_nnan_ninf_nneg_0.0(i15 %x) {
define half @fmul_nnan_ninf_nneg_n0.0(i15 %x) {
; CHECK-LABEL: @fmul_nnan_ninf_nneg_n0.0(
-; CHECK-NEXT: ret half 0xH8000
+; CHECK-NEXT: ret half f0x8000
;
%f = uitofp i15 %x to half
%r = fmul half %f, -0.0
@@ -193,7 +193,7 @@ define half @fmul_nnan_ninf_nneg_n0.0(i15 %x) {
define half @fmul_nnan_nneg_0.0(i16 %x) {
; CHECK-LABEL: @fmul_nnan_nneg_0.0(
; CHECK-NEXT: [[F:%.*]] = uitofp i16 [[X:%.*]] to half
-; CHECK-NEXT: [[R:%.*]] = fmul half [[F]], 0xH0000
+; CHECK-NEXT: [[R:%.*]] = fmul half [[F]], f0x0000
; CHECK-NEXT: ret half [[R]]
;
%f = uitofp i16 %x to half
diff --git a/llvm/test/Transforms/InstSimplify/fp-nan.ll b/llvm/test/Transforms/InstSimplify/fp-nan.ll
index fe3a8c68674c5e..c28af077fd0f45 100644
--- a/llvm/test/Transforms/InstSimplify/fp-nan.ll
+++ b/llvm/test/Transforms/InstSimplify/fp-nan.ll
@@ -139,17 +139,17 @@ define <vscale x 1 x double> @fdivl_nan_op0_scalable_vec(<vscale x 1 x double> %
define <2 x half> @fdiv_nan_op1(<2 x half> %x) {
; CHECK-LABEL: @fdiv_nan_op1(
-; CHECK-NEXT: ret <2 x half> <half 0xH7FFF, half 0xHFF00>
+; CHECK-NEXT: ret <2 x half> <half f0x7FFF, half f0xFF00>
;
- %r = fdiv <2 x half> %x, <half 0xH7FFF, half 0xHFF00>
+ %r = fdiv <2 x half> %x, <half f0x7FFF, half f0xFF00>
ret <2 x half> %r
}
define <vscale x 1 x half> @fdiv_nan_op1_scalable_vec(<vscale x 1 x half> %x) {
; CHECK-LABEL: @fdiv_nan_op1_scalable_vec(
-; CHECK-NEXT: ret <vscale x 1 x half> splat (half 0xH7FFF)
+; CHECK-NEXT: ret <vscale x 1 x half> splat (half f0x7FFF)
;
- %r = fdiv <vscale x 1 x half> %x, splat (half 0xH7FFF)
+ %r = fdiv <vscale x 1 x half> %x, splat (half f0x7FFF)
ret <vscale x 1 x half> %r
}
diff --git a/llvm/test/Transforms/InstSimplify/frexp.ll b/llvm/test/Transforms/InstSimplify/frexp.ll
index 34cfce92bac43e..c8ee6ec0235be4 100644
--- a/llvm/test/Transforms/InstSimplify/frexp.ll
+++ b/llvm/test/Transforms/InstSimplify/frexp.ll
@@ -228,25 +228,25 @@ define { float, i32 } @frexp_neg_denorm() {
define { ppc_fp128, i32 } @frexp_one_ppcf128() {
; CHECK-LABEL: define { ppc_fp128, i32 } @frexp_one_ppcf128() {
-; CHECK-NEXT: ret { ppc_fp128, i32 } { ppc_fp128 0xM3FE00000000000000000000000000000, i32 1 }
+; CHECK-NEXT: ret { ppc_fp128, i32 } { ppc_fp128 f0x00000000000000003FE0000000000000, i32 1 }
;
- %ret = call { ppc_fp128, i32 } @llvm.frexp.ppcf128.i32(ppc_fp128 0xM3FF00000000000000000000000000000)
+ %ret = call { ppc_fp128, i32 } @llvm.frexp.ppcf128.i32(ppc_fp128 f0x00000000000000003FF0000000000000)
ret { ppc_fp128, i32 } %ret
}
define { ppc_fp128, i32 } @frexp_negone_ppcf128() {
; CHECK-LABEL: define { ppc_fp128, i32 } @frexp_negone_ppcf128() {
-; CHECK-NEXT: ret { ppc_fp128, i32 } { ppc_fp128 0xMBFE00000000000000000000000000000, i32 1 }
+; CHECK-NEXT: ret { ppc_fp128, i32 } { ppc_fp128 f0x0000000000000000BFE0000000000000, i32 1 }
;
- %ret = call { ppc_fp128, i32 } @llvm.frexp.ppcf128.i32(ppc_fp128 0xMBFF00000000000000000000000000000)
+ %ret = call { ppc_fp128, i32 } @llvm.frexp.ppcf128.i32(ppc_fp128 f0x0000000000000000BFF0000000000000)
ret { ppc_fp128, i32 } %ret
}
define { ppc_fp128, i32} @canonicalize_noncanonical_zero_1_ppcf128() {
; CHECK-LABEL: define { ppc_fp128, i32 } @canonicalize_noncanonical_zero_1_ppcf128() {
-; CHECK-NEXT: ret { ppc_fp128, i32 } { ppc_fp128 0xM00000000000000000000000000000001, i32 0 }
+; CHECK-NEXT: ret { ppc_fp128, i32 } { ppc_fp128 f0x00000000000000010000000000000000, i32 0 }
;
- %ret = call { ppc_fp128, i32 } @llvm.frexp.ppcf128.i32(ppc_fp128 0xM00000000000000000000000000000001)
+ %ret = call { ppc_fp128, i32 } @llvm.frexp.ppcf128.i32(ppc_fp128 f0x00000000000000010000000000000000)
ret { ppc_fp128, i32 } %ret
}
diff --git a/llvm/test/Transforms/InstSimplify/is_fpclass.ll b/llvm/test/Transforms/InstSimplify/is_fpclass.ll
index b14bfcbbfaac38..f2bdcacd373751 100644
--- a/llvm/test/Transforms/InstSimplify/is_fpclass.ll
+++ b/llvm/test/Transforms/InstSimplify/is_fpclass.ll
@@ -5,7 +5,7 @@ define <2 x i1> @f() {
; CHECK-LABEL: define <2 x i1> @f() {
; CHECK-NEXT: ret <2 x i1> zeroinitializer
;
- %i = call <2 x i1> @llvm.is.fpclass.v2f16(<2 x half> <half 0xH7C00, half 0xH7C00>, i32 3)
+ %i = call <2 x i1> @llvm.is.fpclass.v2f16(<2 x half> <half f0x7C00, half f0x7C00>, i32 3)
ret <2 x i1> %i
}
diff --git a/llvm/test/Transforms/InstSimplify/known-never-infinity.ll b/llvm/test/Transforms/InstSimplify/known-never-infinity.ll
index af83f00368597f..a43f64eebd5e54 100644
--- a/llvm/test/Transforms/InstSimplify/known-never-infinity.ll
+++ b/llvm/test/Transforms/InstSimplify/known-never-infinity.ll
@@ -10,7 +10,7 @@ define i1 @isKnownNeverInfinity_uitofp(i15 %x) {
; CHECK-NEXT: ret i1 true
;
%f = uitofp i15 %x to half
- %r = fcmp une half %f, 0xH7c00
+ %r = fcmp une half %f, f0x7c00
ret i1 %r
}
@@ -20,11 +20,11 @@ define i1 @isNotKnownNeverInfinity_uitofp(i16 %x) {
; CHECK-LABEL: define i1 @isNotKnownNeverInfinity_uitofp
; CHECK-SAME: (i16 [[X:%.*]]) {
; CHECK-NEXT: [[F:%.*]] = uitofp i16 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fcmp une half [[F]], 0xH7C00
+; CHECK-NEXT: [[R:%.*]] = fcmp une half [[F]], f0x7C00
; CHECK-NEXT: ret i1 [[R]]
;
%f = uitofp i16 %x to half
- %r = fcmp une half %f, 0xH7c00
+ %r = fcmp une half %f, f0x7c00
ret i1 %r
}
@@ -34,7 +34,7 @@ define i1 @isKnownNeverNegativeInfinity_uitofp(i15 %x) {
; CHECK-NEXT: ret i1 false
;
%f = uitofp i15 %x to half
- %r = fcmp oeq half %f, 0xHfc00
+ %r = fcmp oeq half %f, f0xfc00
ret i1 %r
}
@@ -46,7 +46,7 @@ define i1 @isNotKnownNeverNegativeInfinity_uitofp(i16 %x) {
; CHECK-NEXT: ret i1 false
;
%f = uitofp i16 %x to half
- %r = fcmp oeq half %f, 0xHfc00
+ %r = fcmp oeq half %f, f0xfc00
ret i1 %r
}
@@ -59,7 +59,7 @@ define i1 @isKnownNeverInfinity_sitofp(i16 %x) {
; CHECK-NEXT: ret i1 true
;
%f = sitofp i16 %x to half
- %r = fcmp une half %f, 0xH7c00
+ %r = fcmp une half %f, f0x7c00
ret i1 %r
}
@@ -69,11 +69,11 @@ define i1 @isNotKnownNeverInfinity_sitofp(i17 %x) {
; CHECK-LABEL: define i1 @isNotKnownNeverInfinity_sitofp
; CHECK-SAME: (i17 [[X:%.*]]) {
; CHECK-NEXT: [[F:%.*]] = sitofp i17 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fcmp une half [[F]], 0xH7C00
+; CHECK-NEXT: [[R:%.*]] = fcmp une half [[F]], f0x7C00
; CHECK-NEXT: ret i1 [[R]]
;
%f = sitofp i17 %x to half
- %r = fcmp une half %f, 0xH7c00
+ %r = fcmp une half %f, f0x7c00
ret i1 %r
}
@@ -83,7 +83,7 @@ define i1 @isKnownNeverNegativeInfinity_sitofp(i16 %x) {
; CHECK-NEXT: ret i1 false
;
%f = sitofp i16 %x to half
- %r = fcmp oeq half %f, 0xHfc00
+ %r = fcmp oeq half %f, f0xfc00
ret i1 %r
}
@@ -93,11 +93,11 @@ define i1 @isNotKnownNeverNegativeInfinity_sitofp(i17 %x) {
; CHECK-LABEL: define i1 @isNotKnownNeverNegativeInfinity_sitofp
; CHECK-SAME: (i17 [[X:%.*]]) {
; CHECK-NEXT: [[F:%.*]] = sitofp i17 [[X]] to half
-; CHECK-NEXT: [[R:%.*]] = fcmp oeq half [[F]], 0xHFC00
+; CHECK-NEXT: [[R:%.*]] = fcmp oeq half [[F]], f0xFC00
; CHECK-NEXT: ret i1 [[R]]
;
%f = sitofp i17 %x to half
- %r = fcmp oeq half %f, 0xHfc00
+ %r = fcmp oeq half %f, f0xfc00
ret i1 %r
}
@@ -444,12 +444,12 @@ define i1 @isKnownNeverInfinity_floor_ppcf128(ppc_fp128 %x) {
; CHECK-SAME: (ppc_fp128 [[X:%.*]]) {
; CHECK-NEXT: [[A:%.*]] = fadd ninf ppc_fp128 [[X]], [[X]]
; CHECK-NEXT: [[E:%.*]] = call ppc_fp128 @llvm.floor.ppcf128(ppc_fp128 [[A]])
-; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], 0xM7FF00000000000000000000000000000
+; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], f0x00000000000000007FF0000000000000
; CHECK-NEXT: ret i1 [[R]]
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.floor.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -458,12 +458,12 @@ define i1 @isKnownNeverInfinity_ceil_ppcf128(ppc_fp128 %x) {
; CHECK-SAME: (ppc_fp128 [[X:%.*]]) {
; CHECK-NEXT: [[A:%.*]] = fadd ninf ppc_fp128 [[X]], [[X]]
; CHECK-NEXT: [[E:%.*]] = call ppc_fp128 @llvm.ceil.ppcf128(ppc_fp128 [[A]])
-; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], 0xM7FF00000000000000000000000000000
+; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], f0x00000000000000007FF0000000000000
; CHECK-NEXT: ret i1 [[R]]
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.ceil.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -472,12 +472,12 @@ define i1 @isKnownNeverInfinity_rint_ppcf128(ppc_fp128 %x) {
; CHECK-SAME: (ppc_fp128 [[X:%.*]]) {
; CHECK-NEXT: [[A:%.*]] = fadd ninf ppc_fp128 [[X]], [[X]]
; CHECK-NEXT: [[E:%.*]] = call ppc_fp128 @llvm.rint.ppcf128(ppc_fp128 [[A]])
-; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], 0xM7FF00000000000000000000000000000
+; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], f0x00000000000000007FF0000000000000
; CHECK-NEXT: ret i1 [[R]]
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.rint.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -486,12 +486,12 @@ define i1 @isKnownNeverInfinity_nearbyint_ppcf128(ppc_fp128 %x) {
; CHECK-SAME: (ppc_fp128 [[X:%.*]]) {
; CHECK-NEXT: [[A:%.*]] = fadd ninf ppc_fp128 [[X]], [[X]]
; CHECK-NEXT: [[E:%.*]] = call ppc_fp128 @llvm.nearbyint.ppcf128(ppc_fp128 [[A]])
-; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], 0xM7FF00000000000000000000000000000
+; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], f0x00000000000000007FF0000000000000
; CHECK-NEXT: ret i1 [[R]]
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.nearbyint.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -500,12 +500,12 @@ define i1 @isKnownNeverInfinity_round_ppcf128(ppc_fp128 %x) {
; CHECK-SAME: (ppc_fp128 [[X:%.*]]) {
; CHECK-NEXT: [[A:%.*]] = fadd ninf ppc_fp128 [[X]], [[X]]
; CHECK-NEXT: [[E:%.*]] = call ppc_fp128 @llvm.round.ppcf128(ppc_fp128 [[A]])
-; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], 0xM7FF00000000000000000000000000000
+; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], f0x00000000000000007FF0000000000000
; CHECK-NEXT: ret i1 [[R]]
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.round.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -514,12 +514,12 @@ define i1 @isKnownNeverInfinity_roundeven_ppcf128(ppc_fp128 %x) {
; CHECK-SAME: (ppc_fp128 [[X:%.*]]) {
; CHECK-NEXT: [[A:%.*]] = fadd ninf ppc_fp128 [[X]], [[X]]
; CHECK-NEXT: [[E:%.*]] = call ppc_fp128 @llvm.roundeven.ppcf128(ppc_fp128 [[A]])
-; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], 0xM7FF00000000000000000000000000000
+; CHECK-NEXT: [[R:%.*]] = fcmp une ppc_fp128 [[E]], f0x00000000000000007FF0000000000000
; CHECK-NEXT: ret i1 [[R]]
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.roundeven.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -530,7 +530,7 @@ define i1 @isKnownNeverInfinity_trunc_ppcf128(ppc_fp128 %x) {
;
%a = fadd ninf ppc_fp128 %x, %x
%e = call ppc_fp128 @llvm.trunc.ppcf128(ppc_fp128 %a)
- %r = fcmp une ppc_fp128 %e, 0xM7FF00000000000000000000000000000
+ %r = fcmp une ppc_fp128 %e, f0x00000000000000007FF0000000000000
ret i1 %r
}
@@ -541,7 +541,7 @@ define i1 @isKnownNeverInfinity_ceil_x86_fp80(x86_fp80 %x) {
;
%a = fadd ninf x86_fp80 %x, %x
%e = call x86_fp80 @llvm.ceil.f80(x86_fp80 %a)
- %r = fcmp une x86_fp80 %e, 0xK7FFF8000000000000000
+ %r = fcmp une x86_fp80 %e, f0x7FFF8000000000000000
ret i1 %r
}
diff --git a/llvm/test/Transforms/InstSimplify/ldexp.ll b/llvm/test/Transforms/InstSimplify/ldexp.ll
index d39f6a1e49673f..bc3cea2b33dddd 100644
--- a/llvm/test/Transforms/InstSimplify/ldexp.ll
+++ b/llvm/test/Transforms/InstSimplify/ldexp.ll
@@ -419,9 +419,9 @@ define void @ldexp_f64() {
define void @ldexp_f16() {
; CHECK-LABEL: @ldexp_f16(
-; CHECK-NEXT: store volatile half 0xH4000, ptr addrspace(1) undef, align 2
-; CHECK-NEXT: store volatile half 0xH4400, ptr addrspace(1) undef, align 2
-; CHECK-NEXT: store volatile half 0xH7C00, ptr addrspace(1) undef, align 2
+; CHECK-NEXT: store volatile half f0x4000, ptr addrspace(1) undef, align 2
+; CHECK-NEXT: store volatile half f0x4400, ptr addrspace(1) undef, align 2
+; CHECK-NEXT: store volatile half f0x7C00, ptr addrspace(1) undef, align 2
; CHECK-NEXT: ret void
;
%one.one = call half @llvm.ldexp.f16.i32(half 1.0, i32 1)
@@ -438,26 +438,26 @@ define void @ldexp_f16() {
define void @ldexp_ppcf128() {
; CHECK-LABEL: @ldexp_ppcf128(
-; CHECK-NEXT: store volatile ppc_fp128 0xMFFF00000000000000000000000000000, ptr addrspace(1) undef, align 16
-; CHECK-NEXT: store volatile ppc_fp128 0xMFFFC0000000000000000000000000000, ptr addrspace(1) undef, align 16
-; CHECK-NEXT: store volatile ppc_fp128 0xM3FD00000000000000000000000000000, ptr addrspace(1) undef, align 16
-; CHECK-NEXT: store volatile ppc_fp128 0xM41700000000000000000000000000000, ptr addrspace(1) undef, align 16
-; CHECK-NEXT: store volatile ppc_fp128 0xMC0700000000000000000000000000000, ptr addrspace(1) undef, align 16
+; CHECK-NEXT: store volatile ppc_fp128 f0x0000000000000000FFF0000000000000, ptr addrspace(1) undef, align 16
+; CHECK-NEXT: store volatile ppc_fp128 f0x0000000000000000FFFC000000000000, ptr addrspace(1) undef, align 16
+; CHECK-NEXT: store volatile ppc_fp128 f0x00000000000000003FD0000000000000, ptr addrspace(1) undef, align 16
+; CHECK-NEXT: store volatile ppc_fp128 f0x00000000000000004170000000000000, ptr addrspace(1) undef, align 16
+; CHECK-NEXT: store volatile ppc_fp128 f0x0000000000000000C070000000000000, ptr addrspace(1) undef, align 16
; CHECK-NEXT: ret void
;
- %neginf = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 0xMFFF00000000000000000000000000000, i32 0)
+ %neginf = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 f0x0000000000000000FFF0000000000000, i32 0)
store volatile ppc_fp128 %neginf, ptr addrspace(1) undef
- %snan = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 0xMFFFC0000000000000000000000000000, i32 0)
+ %snan = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 f0x0000000000000000FFFC000000000000, i32 0)
store volatile ppc_fp128 %snan, ptr addrspace(1) undef
- %one.neg2 = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 0xM3FF00000000000000000000000000000, i32 -2)
+ %one.neg2 = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 f0x00000000000000003FF0000000000000, i32 -2)
store volatile ppc_fp128 %one.neg2, ptr addrspace(1) undef
- %one.24 = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 0xM3FF00000000000000000000000000000, i32 24)
+ %one.24 = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 f0x00000000000000003FF0000000000000, i32 24)
store volatile ppc_fp128 %one.24, ptr addrspace(1) undef
- %negone.8 = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 0xMBFF00000000000000000000000000000, i32 8)
+ %negone.8 = call ppc_fp128 @llvm.ldexp.ppcf128.i32(ppc_fp128 f0x0000000000000000BFF0000000000000, i32 8)
store volatile ppc_fp128 %negone.8, ptr addrspace(1) undef
ret void
diff --git a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll
index 55f311d9a2fca4..9ad9189c9f6c44 100644
--- a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll
+++ b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll
@@ -82,7 +82,7 @@ define amdgpu_kernel void @merge_global_store_2_constants_i16_align_1(ptr addrsp
define amdgpu_kernel void @merge_global_store_2_constants_half_natural_align(ptr addrspace(1) %out) #0 {
; CHECK-LABEL: @merge_global_store_2_constants_half_natural_align(
-; CHECK-NEXT: store <2 x half> <half 0xH3C00, half 0xH4000>, ptr addrspace(1) [[OUT:%.*]], align 2
+; CHECK-NEXT: store <2 x half> <half f0x3C00, half f0x4000>, ptr addrspace(1) [[OUT:%.*]], align 2
; CHECK-NEXT: ret void
;
%out.gep.1 = getelementptr half, ptr addrspace(1) %out, i32 1
@@ -94,7 +94,7 @@ define amdgpu_kernel void @merge_global_store_2_constants_half_natural_align(ptr
define amdgpu_kernel void @merge_global_store_2_constants_half_align_1(ptr addrspace(1) %out) #0 {
; CHECK-LABEL: @merge_global_store_2_constants_half_align_1(
-; CHECK-NEXT: store <2 x half> <half 0xH3C00, half 0xH4000>, ptr addrspace(1) [[OUT:%.*]], align 1
+; CHECK-NEXT: store <2 x half> <half f0x3C00, half f0x4000>, ptr addrspace(1) [[OUT:%.*]], align 1
; CHECK-NEXT: ret void
;
%out.gep.1 = getelementptr half, ptr addrspace(1) %out, i32 1
diff --git a/llvm/test/Transforms/LoopLoadElim/type-mismatch-opaque-ptr.ll b/llvm/test/Transforms/LoopLoadElim/type-mismatch-opaque-ptr.ll
index e6a8af60f12872..c9487675ff8350 100644
--- a/llvm/test/Transforms/LoopLoadElim/type-mismatch-opaque-ptr.ll
+++ b/llvm/test/Transforms/LoopLoadElim/type-mismatch-opaque-ptr.ll
@@ -217,7 +217,7 @@ define void @f4(ptr noalias %A, ptr noalias %B, ptr noalias %C, i64 %N) {
; CHECK-NEXT: [[STORE_FORWARD_CAST]] = bitcast i32 [[A_P1]] to <2 x half>
; CHECK-NEXT: store i32 [[A_P1]], ptr [[AIDX_NEXT]], align 4
; CHECK-NEXT: [[A:%.*]] = load <2 x half>, ptr [[AIDX]], align 4
-; CHECK-NEXT: [[C:%.*]] = fmul <2 x half> [[STORE_FORWARDED]], splat (half 0xH4000)
+; CHECK-NEXT: [[C:%.*]] = fmul <2 x half> [[STORE_FORWARDED]], splat (half f0x4000)
; CHECK-NEXT: [[C_INT:%.*]] = bitcast <2 x half> [[C]] to i32
; CHECK-NEXT: store i32 [[C_INT]], ptr [[CIDX]], align 4
; CHECK-NEXT: [[EXITCOND:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], [[N:%.*]]
diff --git a/llvm/test/Transforms/LoopLoadElim/type-mismatch.ll b/llvm/test/Transforms/LoopLoadElim/type-mismatch.ll
index 56b910eebea92f..7105f8f363398f 100644
--- a/llvm/test/Transforms/LoopLoadElim/type-mismatch.ll
+++ b/llvm/test/Transforms/LoopLoadElim/type-mismatch.ll
@@ -217,7 +217,7 @@ define void @f4(ptr noalias %A, ptr noalias %B, ptr noalias %C, i64 %N) {
; CHECK-NEXT: [[STORE_FORWARD_CAST]] = bitcast i32 [[A_P1]] to <2 x half>
; CHECK-NEXT: store i32 [[A_P1]], ptr [[AIDX_NEXT]], align 4
; CHECK-NEXT: [[A:%.*]] = load <2 x half>, ptr [[AIDX]], align 4
-; CHECK-NEXT: [[C:%.*]] = fmul <2 x half> [[STORE_FORWARDED]], splat (half 0xH4000)
+; CHECK-NEXT: [[C:%.*]] = fmul <2 x half> [[STORE_FORWARDED]], splat (half f0x4000)
; CHECK-NEXT: [[C_INT:%.*]] = bitcast <2 x half> [[C]] to i32
; CHECK-NEXT: store i32 [[C_INT]], ptr [[CIDX]], align 4
; CHECK-NEXT: [[EXITCOND:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], [[N:%.*]]
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/scalable-reductions.ll b/llvm/test/Transforms/LoopVectorize/AArch64/scalable-reductions.ll
index 11cc9715867739..0be2fbf493f603 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/scalable-reductions.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/scalable-reductions.ll
@@ -231,7 +231,7 @@ define bfloat @fadd_fast_bfloat(ptr noalias nocapture readonly %a, i64 %n) {
; CHECK: %[[FADD2:.*]] = fadd fast <8 x bfloat> %[[LOAD2]]
; CHECK: middle.block:
; CHECK: %[[RDX:.*]] = fadd fast <8 x bfloat> %[[FADD2]], %[[FADD1]]
-; CHECK: call fast bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat 0xR0000, <8 x bfloat> %[[RDX]])
+; CHECK: call fast bfloat @llvm.vector.reduce.fadd.v8bf16(bfloat f0x0000, <8 x bfloat> %[[RDX]])
entry:
br label %for.body
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/scalar_interleave.ll b/llvm/test/Transforms/LoopVectorize/AArch64/scalar_interleave.ll
index 079aeb54ebd879..cf170b1c6f2d40 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/scalar_interleave.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/scalar_interleave.ll
@@ -51,7 +51,7 @@ define void @arm_correlate_f16(ptr nocapture noundef readonly %pSrcA, i32 nounde
; CHECK-NEXT: br label [[FOR_BODY16:%.*]]
; CHECK: for.body16:
; CHECK-NEXT: [[J_074:%.*]] = phi i32 [ 0, [[FOR_COND14_PREHEADER]] ], [ [[INC:%.*]], [[FOR_INC:%.*]] ]
-; CHECK-NEXT: [[SUM_073:%.*]] = phi half [ 0xH0000, [[FOR_COND14_PREHEADER]] ], [ [[SUM_1:%.*]], [[FOR_INC]] ]
+; CHECK-NEXT: [[SUM_073:%.*]] = phi half [ f0x0000, [[FOR_COND14_PREHEADER]] ], [ [[SUM_1:%.*]], [[FOR_INC]] ]
; CHECK-NEXT: [[SUB17:%.*]] = sub i32 [[I_077]], [[J_074]]
; CHECK-NEXT: [[CMP18:%.*]] = icmp ult i32 [[SUB17]], [[SRCBLEN_ADDR_0]]
; CHECK-NEXT: [[CMP19:%.*]] = icmp ult i32 [[J_074]], [[SRCALEN_ADDR_0]]
@@ -130,7 +130,7 @@ for.cond14.preheader: ; preds = %if.end12, %for.end
for.body16: ; preds = %for.cond14.preheader, %for.inc
%j.074 = phi i32 [ 0, %for.cond14.preheader ], [ %inc, %for.inc ]
- %sum.073 = phi half [ 0xH0000, %for.cond14.preheader ], [ %sum.1, %for.inc ]
+ %sum.073 = phi half [ f0x0000, %for.cond14.preheader ], [ %sum.1, %for.inc ]
%sub17 = sub i32 %i.077, %j.074
%cmp18 = icmp ult i32 %sub17, %srcBLen.addr.0
%cmp19 = icmp ult i32 %j.074, %srcALen.addr.0
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/sve-illegal-type.ll b/llvm/test/Transforms/LoopVectorize/AArch64/sve-illegal-type.ll
index cf1dd467647fec..46aa8ebdaa76fb 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/sve-illegal-type.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/sve-illegal-type.ll
@@ -36,8 +36,8 @@ define dso_local void @loop_sve_f128(ptr nocapture %ptr, i64 %N) {
; CHECK: vector.body
; CHECK: %[[LOAD1:.*]] = load fp128, ptr
; CHECK-NEXT: %[[LOAD2:.*]] = load fp128, ptr
-; CHECK-NEXT: %[[FSUB1:.*]] = fsub fp128 %[[LOAD1]], 0xL00000000000000008000000000000000
-; CHECK-NEXT: %[[FSUB2:.*]] = fsub fp128 %[[LOAD2]], 0xL00000000000000008000000000000000
+; CHECK-NEXT: %[[FSUB1:.*]] = fsub fp128 %[[LOAD1]], f0x80000000000000000000000000000000
+; CHECK-NEXT: %[[FSUB2:.*]] = fsub fp128 %[[LOAD2]], f0x80000000000000000000000000000000
; CHECK-NEXT: store fp128 %[[FSUB1]], ptr {{.*}}
; CHECK-NEXT: store fp128 %[[FSUB2]], ptr {{.*}}
entry:
@@ -47,7 +47,7 @@ for.body:
%iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
%arrayidx = getelementptr inbounds fp128, ptr %ptr, i64 %iv
%0 = load fp128, ptr %arrayidx, align 16
- %add = fsub fp128 %0, 0xL00000000000000008000000000000000
+ %add = fsub fp128 %0, f0x80000000000000000000000000000000
store fp128 %add, ptr %arrayidx, align 16
%iv.next = add nuw nsw i64 %iv, 1
%exitcond.not = icmp eq i64 %iv.next, %N
diff --git a/llvm/test/Transforms/LoopVectorize/AMDGPU/packed-math.ll b/llvm/test/Transforms/LoopVectorize/AMDGPU/packed-math.ll
index ab7bb667f3f369..de2bf573b5aefc 100644
--- a/llvm/test/Transforms/LoopVectorize/AMDGPU/packed-math.ll
+++ b/llvm/test/Transforms/LoopVectorize/AMDGPU/packed-math.ll
@@ -24,7 +24,7 @@ define half @vectorize_v2f16_loop(ptr addrspace(1) noalias %s) {
; GFX9-NEXT: br i1 [[TMP4]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
; GFX9: middle.block:
; GFX9-NEXT: [[BIN_RDX:%.*]] = fadd fast <2 x half> [[TMP3]], [[TMP2]]
-; GFX9-NEXT: [[TMP5:%.*]] = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> [[BIN_RDX]])
+; GFX9-NEXT: [[TMP5:%.*]] = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> [[BIN_RDX]])
; GFX9-NEXT: br i1 true, label [[FOR_END:%.*]], label [[SCALAR_PH]]
; GFX9: scalar.ph:
; GFX9-NEXT: br label [[FOR_BODY:%.*]]
@@ -54,7 +54,7 @@ define half @vectorize_v2f16_loop(ptr addrspace(1) noalias %s) {
; VI-NEXT: br i1 [[TMP4]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
; VI: middle.block:
; VI-NEXT: [[BIN_RDX:%.*]] = fadd fast <2 x half> [[TMP3]], [[TMP2]]
-; VI-NEXT: [[TMP5:%.*]] = call fast half @llvm.vector.reduce.fadd.v2f16(half 0xH0000, <2 x half> [[BIN_RDX]])
+; VI-NEXT: [[TMP5:%.*]] = call fast half @llvm.vector.reduce.fadd.v2f16(half f0x0000, <2 x half> [[BIN_RDX]])
; VI-NEXT: br i1 true, label [[FOR_END:%.*]], label [[SCALAR_PH]]
; VI: scalar.ph:
; VI-NEXT: br label [[FOR_BODY:%.*]]
@@ -69,7 +69,7 @@ define half @vectorize_v2f16_loop(ptr addrspace(1) noalias %s) {
; CI-NEXT: br label [[FOR_BODY:%.*]]
; CI: for.body:
; CI-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CI-NEXT: [[Q_04:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; CI-NEXT: [[Q_04:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; CI-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds nuw half, ptr addrspace(1) [[S:%.*]], i64 [[INDVARS_IV]]
; CI-NEXT: [[TMP0:%.*]] = load half, ptr addrspace(1) [[ARRAYIDX]], align 2
; CI-NEXT: [[ADD]] = fadd fast half [[Q_04]], [[TMP0]]
diff --git a/llvm/test/Transforms/LoopVectorize/ARM/mve-known-trip-count.ll b/llvm/test/Transforms/LoopVectorize/ARM/mve-known-trip-count.ll
index e796e40a7591ec..2354089060b80d 100644
--- a/llvm/test/Transforms/LoopVectorize/ARM/mve-known-trip-count.ll
+++ b/llvm/test/Transforms/LoopVectorize/ARM/mve-known-trip-count.ll
@@ -611,7 +611,7 @@ while.body: ; preds = %while.body.preheade
%incdec.ptr = getelementptr inbounds i8, ptr %pIn.07, i32 2
%0 = load i16, ptr %pIn.07, align 2
%conv1 = sitofp i16 %0 to half
- %1 = fmul fast half %conv1, 0xH0200
+ %1 = fmul fast half %conv1, f0x0200
%incdec.ptr2 = getelementptr inbounds i8, ptr %pDst.addr.06, i32 2
store half %1, ptr %pDst.addr.06, align 2
%dec = add nsw i32 %blkCnt.08, -1
diff --git a/llvm/test/Transforms/LoopVectorize/ARM/tail-folding-not-allowed.ll b/llvm/test/Transforms/LoopVectorize/ARM/tail-folding-not-allowed.ll
index 0bac1630673067..df485f2c2cdb5a 100644
--- a/llvm/test/Transforms/LoopVectorize/ARM/tail-folding-not-allowed.ll
+++ b/llvm/test/Transforms/LoopVectorize/ARM/tail-folding-not-allowed.ll
@@ -451,7 +451,7 @@ define void @fptrunc_not_allowed(ptr noalias nocapture %A, ptr noalias nocapture
; CHECK-NEXT: [[TMP7:%.*]] = getelementptr inbounds float, ptr [[TMP6]], i32 0
; CHECK-NEXT: store <4 x float> [[TMP5]], ptr [[TMP7]], align 4
; CHECK-NEXT: [[TMP8:%.*]] = fptrunc <4 x float> [[TMP5]] to <4 x half>
-; CHECK-NEXT: [[TMP9:%.*]] = fmul fast <4 x half> [[TMP8]], splat (half 0xH4000)
+; CHECK-NEXT: [[TMP9:%.*]] = fmul fast <4 x half> [[TMP8]], splat (half f0x4000)
; CHECK-NEXT: [[TMP10:%.*]] = getelementptr inbounds half, ptr [[D:%.*]], i32 [[TMP0]]
; CHECK-NEXT: [[TMP11:%.*]] = getelementptr inbounds half, ptr [[TMP10]], i32 0
; CHECK-NEXT: store <4 x half> [[TMP9]], ptr [[TMP11]], align 2
@@ -475,7 +475,7 @@ define void @fptrunc_not_allowed(ptr noalias nocapture %A, ptr noalias nocapture
; CHECK-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[A]], i32 [[I_017]]
; CHECK-NEXT: store float [[ADD]], ptr [[ARRAYIDX2]], align 4
; CHECK-NEXT: [[CONV:%.*]] = fptrunc float [[ADD]] to half
-; CHECK-NEXT: [[FACTOR:%.*]] = fmul fast half [[CONV]], 0xH4000
+; CHECK-NEXT: [[FACTOR:%.*]] = fmul fast half [[CONV]], f0x4000
; CHECK-NEXT: [[ARRAYIDX5:%.*]] = getelementptr inbounds half, ptr [[D]], i32 [[I_017]]
; CHECK-NEXT: store half [[FACTOR]], ptr [[ARRAYIDX5]], align 2
; CHECK-NEXT: [[ADD6]] = add nuw nsw i32 [[I_017]], 1
@@ -498,7 +498,7 @@ for.body:
%arrayidx2 = getelementptr inbounds float, ptr %A, i32 %i.017
store float %add, ptr %arrayidx2, align 4
%conv = fptrunc float %add to half
- %factor = fmul fast half %conv, 0xH4000
+ %factor = fmul fast half %conv, f0x4000
%arrayidx5 = getelementptr inbounds half, ptr %D, i32 %i.017
store half %factor, ptr %arrayidx5, align 2
%add6 = add nuw nsw i32 %i.017, 1
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/illegal-type.ll b/llvm/test/Transforms/LoopVectorize/RISCV/illegal-type.ll
index eeef8f199353b8..418396c8c04e1f 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/illegal-type.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/illegal-type.ll
@@ -44,7 +44,7 @@ define dso_local void @loop_f128(ptr nocapture %ptr, i64 %N) {
; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds fp128, ptr [[PTR:%.*]], i64 [[IV]]
; CHECK-NEXT: [[TMP0:%.*]] = load fp128, ptr [[ARRAYIDX]], align 16
-; CHECK-NEXT: [[ADD:%.*]] = fsub fp128 [[TMP0]], 0xL00000000000000008000000000000000
+; CHECK-NEXT: [[ADD:%.*]] = fsub fp128 [[TMP0]], f0x80000000000000000000000000000000
; CHECK-NEXT: store fp128 [[ADD]], ptr [[ARRAYIDX]], align 16
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
; CHECK-NEXT: [[EXITCOND_NOT:%.*]] = icmp eq i64 [[IV_NEXT]], [[N:%.*]]
@@ -59,7 +59,7 @@ for.body:
%iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
%arrayidx = getelementptr inbounds fp128, ptr %ptr, i64 %iv
%0 = load fp128, ptr %arrayidx, align 16
- %add = fsub fp128 %0, 0xL00000000000000008000000000000000
+ %add = fsub fp128 %0, f0x80000000000000000000000000000000
store fp128 %add, ptr %arrayidx, align 16
%iv.next = add nuw nsw i64 %iv, 1
%exitcond.not = icmp eq i64 %iv.next, %N
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/scalable-reductions.ll b/llvm/test/Transforms/LoopVectorize/RISCV/scalable-reductions.ll
index 01a2a757dea5dd..ada26540bb67e1 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/scalable-reductions.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/scalable-reductions.ll
@@ -234,7 +234,7 @@ define half @fadd_fast_half_zvfh(ptr noalias nocapture readonly %a, i64 %n) "tar
; CHECK: %[[FADD2:.*]] = fadd fast <vscale x 8 x half> %[[LOAD2]]
; CHECK: middle.block:
; CHECK: %[[RDX:.*]] = fadd fast <vscale x 8 x half> %[[FADD2]], %[[FADD1]]
-; CHECK: call fast half @llvm.vector.reduce.fadd.nxv8f16(half 0xH0000, <vscale x 8 x half> %[[RDX]])
+; CHECK: call fast half @llvm.vector.reduce.fadd.nxv8f16(half f0x0000, <vscale x 8 x half> %[[RDX]])
entry:
br label %for.body
@@ -263,7 +263,7 @@ define half @fadd_fast_half_zvfhmin(ptr noalias nocapture readonly %a, i64 %n) "
; CHECK: %[[FADD2:.*]] = fadd fast <16 x half> %[[LOAD2]]
; CHECK: middle.block:
; CHECK: %[[RDX:.*]] = fadd fast <16 x half> %[[FADD2]], %[[FADD1]]
-; CHECK: call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> %[[RDX]])
+; CHECK: call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> %[[RDX]])
entry:
br label %for.body
@@ -292,7 +292,7 @@ define bfloat @fadd_fast_bfloat(ptr noalias nocapture readonly %a, i64 %n) "targ
; CHECK: %[[FADD2:.*]] = fadd fast <16 x bfloat> %[[LOAD2]]
; CHECK: middle.block:
; CHECK: %[[RDX:.*]] = fadd fast <16 x bfloat> %[[FADD2]], %[[FADD1]]
-; CHECK: call fast bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat 0xR0000, <16 x bfloat> %[[RDX]])
+; CHECK: call fast bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat f0x0000, <16 x bfloat> %[[RDX]])
entry:
br label %for.body
@@ -496,7 +496,7 @@ define half @fmuladd_f16_zvfh(ptr %a, ptr %b, i64 %n) "target-features"="+zvfh"
; CHECK: [[MULADD2:%.*]] = call reassoc <vscale x 8 x half> @llvm.fmuladd.nxv8f16(<vscale x 8 x half> [[WIDE_LOAD2]], <vscale x 8 x half> [[WIDE_LOAD4]],
; CHECK: middle.block:
; CHECK: [[BIN_RDX:%.*]] = fadd reassoc <vscale x 8 x half> [[MULADD2]], [[MULADD1]]
-; CHECK: call reassoc half @llvm.vector.reduce.fadd.nxv8f16(half 0xH8000, <vscale x 8 x half> [[BIN_RDX]])
+; CHECK: call reassoc half @llvm.vector.reduce.fadd.nxv8f16(half f0x8000, <vscale x 8 x half> [[BIN_RDX]])
;
entry:
br label %for.body
@@ -533,7 +533,7 @@ define half @fmuladd_f16_zvfhmin(ptr %a, ptr %b, i64 %n) "target-features"="+zvf
; CHECK: [[MULADD2:%.*]] = call reassoc <16 x half> @llvm.fmuladd.v16f16(<16 x half> [[WIDE_LOAD2]], <16 x half> [[WIDE_LOAD4]],
; CHECK: middle.block:
; CHECK: [[BIN_RDX:%.*]] = fadd reassoc <16 x half> [[MULADD2]], [[MULADD1]]
-; CHECK: call reassoc half @llvm.vector.reduce.fadd.v16f16(half 0xH8000, <16 x half> [[BIN_RDX]])
+; CHECK: call reassoc half @llvm.vector.reduce.fadd.v16f16(half f0x8000, <16 x half> [[BIN_RDX]])
;
entry:
br label %for.body
@@ -567,7 +567,7 @@ define bfloat @fmuladd_bf16(ptr %a, ptr %b, i64 %n) "target-features"="+zvfbfmin
; CHECK: [[MULADD2:%.*]] = call reassoc <16 x bfloat> @llvm.fmuladd.v16bf16(<16 x bfloat> [[WIDE_LOAD2]], <16 x bfloat> [[WIDE_LOAD4]],
; CHECK: middle.block:
; CHECK: [[BIN_RDX:%.*]] = fadd reassoc <16 x bfloat> [[MULADD2]], [[MULADD1]]
-; CHECK: call reassoc bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat 0xR8000, <16 x bfloat> [[BIN_RDX]])
+; CHECK: call reassoc bfloat @llvm.vector.reduce.fadd.v16bf16(bfloat f0x8000, <16 x bfloat> [[BIN_RDX]])
;
entry:
br label %for.body
diff --git a/llvm/test/Transforms/LoopVectorize/X86/fp80-widest-type.ll b/llvm/test/Transforms/LoopVectorize/X86/fp80-widest-type.ll
index 2ef9d4b40d9a52..cd6b0874996f5d 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/fp80-widest-type.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/fp80-widest-type.ll
@@ -15,7 +15,7 @@ define x86_fp80 @test() {
; CHECK: for.body3.i.3:
; CHECK-NEXT: [[N_ADDR_112_I_3:%.*]] = phi i64 [ [[DEC_I_3:%.*]], [[FOR_BODY3_I_3]] ], [ 24, [[FOO_EXIT:%.*]] ]
; CHECK-NEXT: [[X_ADDR_111_I_3:%.*]] = phi x86_fp80 [ [[MUL_I_3:%.*]], [[FOR_BODY3_I_3]] ], [ undef, [[FOO_EXIT]] ]
-; CHECK-NEXT: [[MUL_I_3]] = fmul x86_fp80 [[X_ADDR_111_I_3]], 0xK40008000000000000000
+; CHECK-NEXT: [[MUL_I_3]] = fmul x86_fp80 [[X_ADDR_111_I_3]], f0x40008000000000000000
; CHECK-NEXT: [[DEC_I_3]] = add nsw i64 [[N_ADDR_112_I_3]], -1
; CHECK-NEXT: [[CMP2_I_3:%.*]] = icmp sgt i64 [[N_ADDR_112_I_3]], 1
; CHECK-NEXT: br i1 [[CMP2_I_3]], label [[FOR_BODY3_I_3]], label [[FOO_EXIT_3:%.*]]
@@ -29,7 +29,7 @@ foo.exit:
for.body3.i.3: ; preds = %for.body3.i.3, %foo.exit
%n.addr.112.i.3 = phi i64 [ %dec.i.3, %for.body3.i.3 ], [ 24, %foo.exit ]
%x.addr.111.i.3 = phi x86_fp80 [ %mul.i.3, %for.body3.i.3 ], [ undef, %foo.exit ]
- %mul.i.3 = fmul x86_fp80 %x.addr.111.i.3, 0xK40008000000000000000
+ %mul.i.3 = fmul x86_fp80 %x.addr.111.i.3, f0x40008000000000000000
%dec.i.3 = add nsw i64 %n.addr.112.i.3, -1
%cmp2.i.3 = icmp sgt i64 %n.addr.112.i.3, 1
br i1 %cmp2.i.3, label %for.body3.i.3, label %foo.exit.3
diff --git a/llvm/test/Transforms/LoopVectorize/X86/x86_fp80-vector-store.ll b/llvm/test/Transforms/LoopVectorize/X86/x86_fp80-vector-store.ll
index 921cf4246f7259..b16516fad14850 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/x86_fp80-vector-store.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/x86_fp80-vector-store.ll
@@ -16,8 +16,8 @@ define void @example() nounwind ssp uwtable {
; CHECK-NEXT: [[TMP0:%.*]] = or disjoint i64 [[INDEX]], 1
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr inbounds [1024 x x86_fp80], ptr @x, i64 0, i64 [[INDEX]]
; CHECK-NEXT: [[TMP2:%.*]] = getelementptr inbounds [1024 x x86_fp80], ptr @x, i64 0, i64 [[TMP0]]
-; CHECK-NEXT: store x86_fp80 0xK3FFF8000000000000000, ptr [[TMP1]], align 16
-; CHECK-NEXT: store x86_fp80 0xK3FFF8000000000000000, ptr [[TMP2]], align 16
+; CHECK-NEXT: store x86_fp80 f0x3FFF8000000000000000, ptr [[TMP1]], align 16
+; CHECK-NEXT: store x86_fp80 f0x3FFF8000000000000000, ptr [[TMP2]], align 16
; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 2
; CHECK-NEXT: [[TMP3:%.*]] = icmp eq i64 [[INDEX_NEXT]], 1024
; CHECK-NEXT: br i1 [[TMP3]], label [[FOR_END:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
diff --git a/llvm/test/Transforms/MemCpyOpt/2008-02-24-MultipleUseofSRet.ll b/llvm/test/Transforms/MemCpyOpt/2008-02-24-MultipleUseofSRet.ll
index 66e109ae0bca99..c60561ffb9d7d2 100644
--- a/llvm/test/Transforms/MemCpyOpt/2008-02-24-MultipleUseofSRet.ll
+++ b/llvm/test/Transforms/MemCpyOpt/2008-02-24-MultipleUseofSRet.ll
@@ -10,15 +10,15 @@ target triple = "i386-pc-linux-gnu"
define internal fastcc void @initialize(ptr noalias nocapture sret(%0) %agg.result) nounwind {
; CHECK-LABEL: @initialize(
; CHECK-NEXT: entry:
-; CHECK-NEXT: store x86_fp80 0xK00000000000000000000, ptr [[AGG_RESULT:%.*]], align 4
+; CHECK-NEXT: store x86_fp80 f0x00000000000000000000, ptr [[AGG_RESULT:%.*]], align 4
; CHECK-NEXT: [[AGG_RESULT_15:%.*]] = getelementptr [[TMP0:%.*]], ptr [[AGG_RESULT]], i32 0, i32 1
-; CHECK-NEXT: store x86_fp80 0xK00000000000000000000, ptr [[AGG_RESULT_15]], align 4
+; CHECK-NEXT: store x86_fp80 f0x00000000000000000000, ptr [[AGG_RESULT_15]], align 4
; CHECK-NEXT: ret void
;
entry:
- store x86_fp80 0xK00000000000000000000, ptr %agg.result
+ store x86_fp80 f0x00000000000000000000, ptr %agg.result
%agg.result.15 = getelementptr %0, ptr %agg.result, i32 0, i32 1
- store x86_fp80 0xK00000000000000000000, ptr %agg.result.15
+ store x86_fp80 f0x00000000000000000000, ptr %agg.result.15
ret void
}
diff --git a/llvm/test/Transforms/MemCpyOpt/memcpy-to-memset.ll b/llvm/test/Transforms/MemCpyOpt/memcpy-to-memset.ll
index 1858f306db9f3c..542a57a87bf9eb 100644
--- a/llvm/test/Transforms/MemCpyOpt/memcpy-to-memset.ll
+++ b/llvm/test/Transforms/MemCpyOpt/memcpy-to-memset.ll
@@ -77,7 +77,7 @@ define void @test_i1x16_one() nounwind {
ret void
}
-@half = internal constant half 0xH0000, align 4
+@half = internal constant half f0x0000, align 4
define void @test_half() nounwind {
; CHECK-LABEL: @test_half(
; CHECK-NEXT: [[A:%.*]] = alloca half, align 4
diff --git a/llvm/test/Transforms/MemCpyOpt/memcpy.ll b/llvm/test/Transforms/MemCpyOpt/memcpy.ll
index 65d78f4199aa02..3099e07a13a9b6 100644
--- a/llvm/test/Transforms/MemCpyOpt/memcpy.ll
+++ b/llvm/test/Transforms/MemCpyOpt/memcpy.ll
@@ -24,7 +24,7 @@ define void @test1(ptr sret(%0) %agg.result, x86_fp80 %z.0, x86_fp80 %z.1) noun
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP2:%.*]] = alloca [[TMP0:%.*]], align 16
; CHECK-NEXT: [[MEMTMP:%.*]] = alloca [[TMP0]], align 16
-; CHECK-NEXT: [[TMP5:%.*]] = fsub x86_fp80 0xK80000000000000000000, [[Z_1:%.*]]
+; CHECK-NEXT: [[TMP5:%.*]] = fsub x86_fp80 f0x80000000000000000000, [[Z_1:%.*]]
; CHECK-NEXT: call void @ccoshl(ptr sret([[TMP0]]) [[TMP2]], x86_fp80 [[TMP5]], x86_fp80 [[Z_0:%.*]]) #[[ATTR2:[0-9]+]]
; CHECK-NEXT: call void @llvm.memcpy.p0.p0.i32(ptr align 16 [[AGG_RESULT:%.*]], ptr align 16 [[TMP2]], i32 32, i1 false)
; CHECK-NEXT: ret void
@@ -32,7 +32,7 @@ define void @test1(ptr sret(%0) %agg.result, x86_fp80 %z.0, x86_fp80 %z.1) noun
entry:
%tmp2 = alloca %0
%memtmp = alloca %0, align 16
- %tmp5 = fsub x86_fp80 0xK80000000000000000000, %z.1
+ %tmp5 = fsub x86_fp80 f0x80000000000000000000, %z.1
call void @ccoshl(ptr sret(%0) %memtmp, x86_fp80 %tmp5, x86_fp80 %z.0) nounwind
call void @llvm.memcpy.p0.p0.i32(ptr align 16 %tmp2, ptr align 16 %memtmp, i32 32, i1 false)
call void @llvm.memcpy.p0.p0.i32(ptr align 16 %agg.result, ptr align 16 %tmp2, i32 32, i1 false)
diff --git a/llvm/test/Transforms/MemCpyOpt/sret.ll b/llvm/test/Transforms/MemCpyOpt/sret.ll
index 1d0f0934ec2da2..53978372bd65ab 100644
--- a/llvm/test/Transforms/MemCpyOpt/sret.ll
+++ b/llvm/test/Transforms/MemCpyOpt/sret.ll
@@ -13,7 +13,7 @@ define void @ccosl(ptr noalias writable sret(%0) %agg.result, ptr byval(%0) alig
; CHECK-NEXT: [[MEMTMP:%.*]] = alloca [[TMP0]], align 16
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr [[TMP0]], ptr [[Z:%.*]], i32 0, i32 1
; CHECK-NEXT: [[TMP2:%.*]] = load x86_fp80, ptr [[TMP1]], align 16
-; CHECK-NEXT: [[TMP3:%.*]] = fsub x86_fp80 0xK80000000000000000000, [[TMP2]]
+; CHECK-NEXT: [[TMP3:%.*]] = fsub x86_fp80 f0x80000000000000000000, [[TMP2]]
; CHECK-NEXT: [[TMP4:%.*]] = getelementptr [[TMP0]], ptr [[IZ]], i32 0, i32 1
; CHECK-NEXT: [[TMP8:%.*]] = load x86_fp80, ptr [[Z]], align 16
; CHECK-NEXT: store x86_fp80 [[TMP3]], ptr [[IZ]], align 16
@@ -26,7 +26,7 @@ entry:
%memtmp = alloca %0, align 16
%tmp1 = getelementptr %0, ptr %z, i32 0, i32 1
%tmp2 = load x86_fp80, ptr %tmp1, align 16
- %tmp3 = fsub x86_fp80 0xK80000000000000000000, %tmp2
+ %tmp3 = fsub x86_fp80 f0x80000000000000000000, %tmp2
%tmp4 = getelementptr %0, ptr %iz, i32 0, i32 1
%tmp8 = load x86_fp80, ptr %z, align 16
store x86_fp80 %tmp3, ptr %iz, align 16
diff --git a/llvm/test/Transforms/Reassociate/reassoc-intermediate-fnegs.ll b/llvm/test/Transforms/Reassociate/reassoc-intermediate-fnegs.ll
index 9b2bbf2483f869..f12a601b9de8b5 100644
--- a/llvm/test/Transforms/Reassociate/reassoc-intermediate-fnegs.ll
+++ b/llvm/test/Transforms/Reassociate/reassoc-intermediate-fnegs.ll
@@ -6,14 +6,14 @@
define half @faddsubAssoc1(half %a, half %b) {
; CHECK-LABEL: @faddsubAssoc1(
-; CHECK-NEXT: [[T2_NEG:%.*]] = fmul fast half [[A:%.*]], 0xHC500
-; CHECK-NEXT: [[REASS_MUL:%.*]] = fmul fast half [[B:%.*]], 0xH4500
+; CHECK-NEXT: [[T2_NEG:%.*]] = fmul fast half [[A:%.*]], f0xC500
+; CHECK-NEXT: [[REASS_MUL:%.*]] = fmul fast half [[B:%.*]], f0x4500
; CHECK-NEXT: [[T5:%.*]] = fadd fast half [[REASS_MUL]], [[T2_NEG]]
; CHECK-NEXT: ret half [[T5]]
;
- %t1 = fmul fast half %b, 0xH4200 ; 3*b
- %t2 = fmul fast half %a, 0xH4500 ; 5*a
- %t3 = fmul fast half %b, 0xH4000 ; 2*b
+ %t1 = fmul fast half %b, f0x4200 ; 3*b
+ %t2 = fmul fast half %a, f0x4500 ; 5*a
+ %t3 = fmul fast half %b, f0x4000 ; 2*b
%t4 = fsub fast half %t2, %t1 ; 5 * a - 3 * b
%t5 = fsub fast half %t3, %t4 ; 2 * b - ( 5 * a - 3 * b)
ret half %t5 ; = 5 * (b - a)
@@ -23,13 +23,13 @@ define half @faddsubAssoc1(half %a, half %b) {
define half @faddsubAssoc2(half %a, half %b) {
; CHECK-LABEL: @faddsubAssoc2(
-; CHECK-NEXT: [[T2:%.*]] = fmul fast half [[A:%.*]], 0xH4500
+; CHECK-NEXT: [[T2:%.*]] = fmul fast half [[A:%.*]], f0x4500
; CHECK-NEXT: [[T5:%.*]] = fadd fast half [[B:%.*]], [[T2]]
; CHECK-NEXT: ret half [[T5]]
;
- %t1 = fmul fast half %b, 0xH4200 ; 3*b
- %t2 = fmul fast half %a, 0xH4500 ; 5*a
- %t3 = fmul fast half %b, 0xH4000 ; 2*b
+ %t1 = fmul fast half %b, f0x4200 ; 3*b
+ %t2 = fmul fast half %a, f0x4500 ; 5*a
+ %t3 = fmul fast half %b, f0x4000 ; 2*b
%t4 = fadd fast half %t2, %t1 ; 5 * a + 3 * b
%t5 = fsub fast half %t4, %t3 ; (5 * a + 3 * b) - (2 * b)
ret half %t5 ; = 5 * a + b
diff --git a/llvm/test/Transforms/SCCP/fp-bc-icmp-const-fold.ll b/llvm/test/Transforms/SCCP/fp-bc-icmp-const-fold.ll
index 6a8b52d0ac4814..1315c9a5d66419 100644
--- a/llvm/test/Transforms/SCCP/fp-bc-icmp-const-fold.ll
+++ b/llvm/test/Transforms/SCCP/fp-bc-icmp-const-fold.ll
@@ -35,7 +35,7 @@ if.else14: ; preds = %if.end4
br label %do.body
do.body: ; preds = %do.body, %if.else14
- %scale.0 = phi ppc_fp128 [ 0xM3FF00000000000000000000000000000, %if.else14 ], [ %scale.0, %do.body ]
+ %scale.0 = phi ppc_fp128 [ f0x00000000000000003FF0000000000000, %if.else14 ], [ %scale.0, %do.body ]
br i1 %arg, label %do.body, label %if.then33
if.then33: ; preds = %do.body
diff --git a/llvm/test/Transforms/SCCP/pr50901.ll b/llvm/test/Transforms/SCCP/pr50901.ll
index d48d67532d88bd..ba7bf782666fa0 100644
--- a/llvm/test/Transforms/SCCP/pr50901.ll
+++ b/llvm/test/Transforms/SCCP/pr50901.ll
@@ -69,8 +69,8 @@
@g_5 = dso_local global i8 1, align 1, !dbg !16
@g_6 = dso_local global ptr null, align 8, !dbg !19
@g_7 = dso_local global ptr null, align 8, !dbg !23
-@g_8 = dso_local global half 0xH4321, align 4, !dbg !86
-@g_9 = dso_local global bfloat 0xR3F80, align 4, !dbg !90
+@g_8 = dso_local global half f0x4321, align 4, !dbg !86
+@g_9 = dso_local global bfloat f0x3F80, align 4, !dbg !90
@_ZL4g_11 = internal global i32 -5, align 4, !dbg !25
@_ZL4g_22 = internal global float 0x4016333340000000, align 4, !dbg !27
@_ZL4g_33 = internal global i8 98, align 1, !dbg !29
@@ -79,8 +79,8 @@
@_ZL4g_66 = internal global ptr null, align 8, !dbg !35
@_ZL4g_77 = internal global ptr inttoptr (i64 70 to ptr), align 8, !dbg !37
@g_float_undef = internal global float undef, align 4, !dbg !83
-@_ZL4g_88 = internal global half 0xH5678, align 4, !dbg !88
-@_ZL4g_99 = internal global bfloat 0xR5CAE, align 4, !dbg !92
+@_ZL4g_88 = internal global half f0x5678, align 4, !dbg !88
+@_ZL4g_99 = internal global bfloat f0x5CAE, align 4, !dbg !92
@g_i32_undef = internal global i32 undef, align 4, !dbg !95
@g_ptr_undef = internal global ptr undef, align 8, !dbg !97
diff --git a/llvm/test/Transforms/SCCP/sitofp.ll b/llvm/test/Transforms/SCCP/sitofp.ll
index 24f04ae1fccb91..52783ced1fa49e 100644
--- a/llvm/test/Transforms/SCCP/sitofp.ll
+++ b/llvm/test/Transforms/SCCP/sitofp.ll
@@ -14,7 +14,7 @@ define float @sitofp_and(i8 %x) {
define half @sitofp_const(i8 %x) {
; CHECK-LABEL: @sitofp_const(
-; CHECK-NEXT: ret half 0xH5140
+; CHECK-NEXT: ret half f0x5140
;
%r = sitofp i8 42 to half
ret half %r
diff --git a/llvm/test/Transforms/SLPVectorizer/AArch64/extracts-from-scalarizable-vector.ll b/llvm/test/Transforms/SLPVectorizer/AArch64/extracts-from-scalarizable-vector.ll
index c99dd53117e5f1..45cd7c025f17e3 100644
--- a/llvm/test/Transforms/SLPVectorizer/AArch64/extracts-from-scalarizable-vector.ll
+++ b/llvm/test/Transforms/SLPVectorizer/AArch64/extracts-from-scalarizable-vector.ll
@@ -5,25 +5,25 @@ define i1 @degenerate() {
; CHECK-LABEL: define i1 @degenerate() {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = extractelement <4 x fp128> zeroinitializer, i32 0
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt fp128 [[TMP0]], 0xL00000000000000000000000000000000
-; CHECK-NEXT: [[CMP3:%.*]] = fcmp olt fp128 [[TMP0]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt fp128 [[TMP0]], f0x00000000000000000000000000000000
+; CHECK-NEXT: [[CMP3:%.*]] = fcmp olt fp128 [[TMP0]], f0x00000000000000000000000000000000
; CHECK-NEXT: [[OR_COND:%.*]] = and i1 [[CMP]], [[CMP3]]
; CHECK-NEXT: [[TMP1:%.*]] = extractelement <4 x fp128> zeroinitializer, i32 0
-; CHECK-NEXT: [[CMP6:%.*]] = fcmp ogt fp128 [[TMP1]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[CMP6:%.*]] = fcmp ogt fp128 [[TMP1]], f0x00000000000000000000000000000000
; CHECK-NEXT: [[OR_COND29:%.*]] = select i1 [[OR_COND]], i1 [[CMP6]], i1 false
-; CHECK-NEXT: [[CMP10:%.*]] = fcmp olt fp128 [[TMP1]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[CMP10:%.*]] = fcmp olt fp128 [[TMP1]], f0x00000000000000000000000000000000
; CHECK-NEXT: [[OR_COND30:%.*]] = select i1 [[OR_COND29]], i1 [[CMP10]], i1 false
; CHECK-NEXT: ret i1 [[OR_COND30]]
;
entry:
%0 = extractelement <4 x fp128> zeroinitializer, i32 0
- %cmp = fcmp ogt fp128 %0, 0xL00000000000000000000000000000000
- %cmp3 = fcmp olt fp128 %0, 0xL00000000000000000000000000000000
+ %cmp = fcmp ogt fp128 %0, f0x00000000000000000000000000000000
+ %cmp3 = fcmp olt fp128 %0, f0x00000000000000000000000000000000
%or.cond = and i1 %cmp, %cmp3
%1 = extractelement <4 x fp128> zeroinitializer, i32 0
- %cmp6 = fcmp ogt fp128 %1, 0xL00000000000000000000000000000000
+ %cmp6 = fcmp ogt fp128 %1, f0x00000000000000000000000000000000
%or.cond29 = select i1 %or.cond, i1 %cmp6, i1 false
- %cmp10 = fcmp olt fp128 %1, 0xL00000000000000000000000000000000
+ %cmp10 = fcmp olt fp128 %1, f0x00000000000000000000000000000000
%or.cond30 = select i1 %or.cond29, i1 %cmp10, i1 false
ret i1 %or.cond30
}
@@ -33,25 +33,25 @@ define i1 @with_inputs(<4 x fp128> %a) {
; CHECK-SAME: (<4 x fp128> [[A:%.*]]) {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = extractelement <4 x fp128> [[A]], i32 0
-; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt fp128 [[TMP0]], 0xL00000000000000000000000000000000
-; CHECK-NEXT: [[CMP3:%.*]] = fcmp olt fp128 [[TMP0]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[CMP:%.*]] = fcmp ogt fp128 [[TMP0]], f0x00000000000000000000000000000000
+; CHECK-NEXT: [[CMP3:%.*]] = fcmp olt fp128 [[TMP0]], f0x00000000000000000000000000000000
; CHECK-NEXT: [[OR_COND:%.*]] = and i1 [[CMP]], [[CMP3]]
; CHECK-NEXT: [[TMP1:%.*]] = extractelement <4 x fp128> [[A]], i32 1
-; CHECK-NEXT: [[CMP6:%.*]] = fcmp ogt fp128 [[TMP1]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[CMP6:%.*]] = fcmp ogt fp128 [[TMP1]], f0x00000000000000000000000000000000
; CHECK-NEXT: [[OR_COND29:%.*]] = select i1 [[OR_COND]], i1 [[CMP6]], i1 false
-; CHECK-NEXT: [[CMP10:%.*]] = fcmp olt fp128 [[TMP1]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[CMP10:%.*]] = fcmp olt fp128 [[TMP1]], f0x00000000000000000000000000000000
; CHECK-NEXT: [[OR_COND30:%.*]] = select i1 [[OR_COND29]], i1 [[CMP10]], i1 false
; CHECK-NEXT: ret i1 [[OR_COND30]]
;
entry:
%0 = extractelement <4 x fp128> %a, i32 0
- %cmp = fcmp ogt fp128 %0, 0xL00000000000000000000000000000000
- %cmp3 = fcmp olt fp128 %0, 0xL00000000000000000000000000000000
+ %cmp = fcmp ogt fp128 %0, f0x00000000000000000000000000000000
+ %cmp3 = fcmp olt fp128 %0, f0x00000000000000000000000000000000
%or.cond = and i1 %cmp, %cmp3
%1 = extractelement <4 x fp128> %a, i32 1
- %cmp6 = fcmp ogt fp128 %1, 0xL00000000000000000000000000000000
+ %cmp6 = fcmp ogt fp128 %1, f0x00000000000000000000000000000000
%or.cond29 = select i1 %or.cond, i1 %cmp6, i1 false
- %cmp10 = fcmp olt fp128 %1, 0xL00000000000000000000000000000000
+ %cmp10 = fcmp olt fp128 %1, f0x00000000000000000000000000000000
%or.cond30 = select i1 %or.cond29, i1 %cmp10, i1 false
ret i1 %or.cond30
}
diff --git a/llvm/test/Transforms/SLPVectorizer/AArch64/gather-load-128.ll b/llvm/test/Transforms/SLPVectorizer/AArch64/gather-load-128.ll
index 3f02f974e59e67..ad57cd67ef57a4 100644
--- a/llvm/test/Transforms/SLPVectorizer/AArch64/gather-load-128.ll
+++ b/llvm/test/Transforms/SLPVectorizer/AArch64/gather-load-128.ll
@@ -8,10 +8,10 @@ define void @gather_load_fp128(ptr %arg) #0 {
; CHECK-NEXT: [[LOAD1:%.*]] = load fp128, ptr [[GEP]], align 1
; CHECK-NEXT: [[LOAD2:%.*]] = load fp128, ptr null, align 1
; CHECK-NEXT: [[LOAD3:%.*]] = load fp128, ptr null, align 1
-; CHECK-NEXT: [[FCMP0:%.*]] = fcmp oeq fp128 [[LOAD0]], 0xL00000000000000000000000000000000
-; CHECK-NEXT: [[FCMP1:%.*]] = fcmp oeq fp128 [[LOAD1]], 0xL00000000000000000000000000000000
-; CHECK-NEXT: [[FCMP2:%.*]] = fcmp oeq fp128 [[LOAD2]], 0xL00000000000000000000000000000000
-; CHECK-NEXT: [[FCMP3:%.*]] = fcmp oeq fp128 [[LOAD3]], 0xL00000000000000000000000000000000
+; CHECK-NEXT: [[FCMP0:%.*]] = fcmp oeq fp128 [[LOAD0]], f0x00000000000000000000000000000000
+; CHECK-NEXT: [[FCMP1:%.*]] = fcmp oeq fp128 [[LOAD1]], f0x00000000000000000000000000000000
+; CHECK-NEXT: [[FCMP2:%.*]] = fcmp oeq fp128 [[LOAD2]], f0x00000000000000000000000000000000
+; CHECK-NEXT: [[FCMP3:%.*]] = fcmp oeq fp128 [[LOAD3]], f0x00000000000000000000000000000000
; CHECK-NEXT: ret void
;
%gep = getelementptr i8, ptr %arg, i64 16
diff --git a/llvm/test/Transforms/SLPVectorizer/AArch64/reduce-fadd.ll b/llvm/test/Transforms/SLPVectorizer/AArch64/reduce-fadd.ll
index 6dceabe1d3243b..be5d9b4b893f07 100644
--- a/llvm/test/Transforms/SLPVectorizer/AArch64/reduce-fadd.ll
+++ b/llvm/test/Transforms/SLPVectorizer/AArch64/reduce-fadd.ll
@@ -38,7 +38,7 @@ define half @reduce_fast_half4(<4 x half> %vec4) {
; CHECK-LABEL: define half @reduce_fast_half4(
; CHECK-SAME: <4 x half> [[VEC4:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> [[VEC4]])
+; CHECK-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> [[VEC4]])
; CHECK-NEXT: ret half [[TMP0]]
;
entry:
@@ -85,7 +85,7 @@ define half @reduce_fast_half8(<8 x half> %vec8) {
; NOFP16-NEXT: [[ELT6:%.*]] = extractelement <8 x half> [[VEC8]], i64 6
; NOFP16-NEXT: [[ELT7:%.*]] = extractelement <8 x half> [[VEC8]], i64 7
; NOFP16-NEXT: [[TMP0:%.*]] = shufflevector <8 x half> [[VEC8]], <8 x half> poison, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
-; NOFP16-NEXT: [[TMP1:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> [[TMP0]])
+; NOFP16-NEXT: [[TMP1:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> [[TMP0]])
; NOFP16-NEXT: [[OP_RDX:%.*]] = fadd fast half [[TMP1]], [[ELT4]]
; NOFP16-NEXT: [[OP_RDX1:%.*]] = fadd fast half [[ELT5]], [[ELT6]]
; NOFP16-NEXT: [[OP_RDX2:%.*]] = fadd fast half [[OP_RDX]], [[OP_RDX1]]
@@ -95,7 +95,7 @@ define half @reduce_fast_half8(<8 x half> %vec8) {
; FULLFP16-LABEL: define half @reduce_fast_half8(
; FULLFP16-SAME: <8 x half> [[VEC8:%.*]]) #[[ATTR0]] {
; FULLFP16-NEXT: [[ENTRY:.*:]]
-; FULLFP16-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> [[VEC8]])
+; FULLFP16-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> [[VEC8]])
; FULLFP16-NEXT: ret half [[TMP0]]
;
entry:
@@ -161,7 +161,7 @@ define half @reduce_fast_half16(<16 x half> %vec16) {
; CHECK-LABEL: define half @reduce_fast_half16(
; CHECK-SAME: <16 x half> [[VEC16:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> [[VEC16]])
+; CHECK-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> [[VEC16]])
; CHECK-NEXT: ret half [[TMP0]]
;
entry:
@@ -784,7 +784,7 @@ define half @reduce_unordered_fast_half4(<4 x half> %vec4) {
; CHECK-LABEL: define half @reduce_unordered_fast_half4(
; CHECK-SAME: <4 x half> [[VEC4:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> [[VEC4]])
+; CHECK-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> [[VEC4]])
; CHECK-NEXT: ret half [[TMP0]]
;
entry:
diff --git a/llvm/test/Transforms/SLPVectorizer/AMDGPU/reduction.ll b/llvm/test/Transforms/SLPVectorizer/AMDGPU/reduction.ll
index b5bfdf284ca626..f937c7fae1d835 100644
--- a/llvm/test/Transforms/SLPVectorizer/AMDGPU/reduction.ll
+++ b/llvm/test/Transforms/SLPVectorizer/AMDGPU/reduction.ll
@@ -5,7 +5,7 @@
define half @reduction_half4(<4 x half> %a) {
; GCN-LABEL: @reduction_half4(
; GCN-NEXT: entry:
-; GCN-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> [[A:%.*]])
+; GCN-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> [[A:%.*]])
; GCN-NEXT: ret half [[TMP0]]
;
entry:
@@ -24,7 +24,7 @@ entry:
define half @reduction_half8(<8 x half> %vec8) {
; GCN-LABEL: @reduction_half8(
; GCN-NEXT: entry:
-; GCN-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> [[VEC8:%.*]])
+; GCN-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> [[VEC8:%.*]])
; GCN-NEXT: ret half [[TMP0]]
;
entry:
@@ -51,15 +51,15 @@ entry:
define half @reduction_half16(<16 x half> %vec16) {
; GFX9-LABEL: @reduction_half16(
; GFX9-NEXT: entry:
-; GFX9-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v16f16(half 0xH0000, <16 x half> [[VEC16:%.*]])
+; GFX9-NEXT: [[TMP0:%.*]] = call fast half @llvm.vector.reduce.fadd.v16f16(half f0x0000, <16 x half> [[VEC16:%.*]])
; GFX9-NEXT: ret half [[TMP0]]
;
; VI-LABEL: @reduction_half16(
; VI-NEXT: entry:
; VI-NEXT: [[TMP0:%.*]] = shufflevector <16 x half> [[VEC16:%.*]], <16 x half> poison, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
-; VI-NEXT: [[TMP1:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> [[TMP0]])
+; VI-NEXT: [[TMP1:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> [[TMP0]])
; VI-NEXT: [[TMP2:%.*]] = shufflevector <16 x half> [[VEC16]], <16 x half> poison, <8 x i32> <i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
-; VI-NEXT: [[TMP3:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half 0xH0000, <8 x half> [[TMP2]])
+; VI-NEXT: [[TMP3:%.*]] = call fast half @llvm.vector.reduce.fadd.v8f16(half f0x0000, <8 x half> [[TMP2]])
; VI-NEXT: [[OP_RDX:%.*]] = fadd fast half [[TMP1]], [[TMP3]]
; VI-NEXT: ret half [[OP_RDX]]
;
diff --git a/llvm/test/Transforms/SLPVectorizer/NVPTX/v2f16.ll b/llvm/test/Transforms/SLPVectorizer/NVPTX/v2f16.ll
index 13773bf901b9bf..de797087baf9cd 100644
--- a/llvm/test/Transforms/SLPVectorizer/NVPTX/v2f16.ll
+++ b/llvm/test/Transforms/SLPVectorizer/NVPTX/v2f16.ll
@@ -11,8 +11,8 @@ define void @fusion(ptr noalias nocapture align 256 dereferenceable(19267584) %a
; CHECK-NEXT: [[TMP11:%.*]] = getelementptr inbounds half, ptr [[ARG1:%.*]], i64 [[TMP6]]
; CHECK-NEXT: [[TMP16:%.*]] = getelementptr inbounds half, ptr [[ARG:%.*]], i64 [[TMP6]]
; CHECK-NEXT: [[TMP1:%.*]] = load <2 x half>, ptr [[TMP11]], align 8
-; CHECK-NEXT: [[TMP2:%.*]] = fmul fast <2 x half> [[TMP1]], splat (half 0xH5380)
-; CHECK-NEXT: [[TMP3:%.*]] = fadd fast <2 x half> [[TMP2]], splat (half 0xH57F0)
+; CHECK-NEXT: [[TMP2:%.*]] = fmul fast <2 x half> [[TMP1]], splat (half f0x5380)
+; CHECK-NEXT: [[TMP3:%.*]] = fadd fast <2 x half> [[TMP2]], splat (half f0x57F0)
; CHECK-NEXT: store <2 x half> [[TMP3]], ptr [[TMP16]], align 8
; CHECK-NEXT: ret void
;
@@ -24,14 +24,14 @@ define void @fusion(ptr noalias nocapture align 256 dereferenceable(19267584) %a
; NOVECTOR-NEXT: [[TMP7:%.*]] = or disjoint i64 [[TMP6]], 1
; NOVECTOR-NEXT: [[TMP11:%.*]] = getelementptr inbounds half, ptr [[ARG1:%.*]], i64 [[TMP6]]
; NOVECTOR-NEXT: [[TMP12:%.*]] = load half, ptr [[TMP11]], align 8
-; NOVECTOR-NEXT: [[TMP13:%.*]] = fmul fast half [[TMP12]], 0xH5380
-; NOVECTOR-NEXT: [[TMP14:%.*]] = fadd fast half [[TMP13]], 0xH57F0
+; NOVECTOR-NEXT: [[TMP13:%.*]] = fmul fast half [[TMP12]], f0x5380
+; NOVECTOR-NEXT: [[TMP14:%.*]] = fadd fast half [[TMP13]], f0x57F0
; NOVECTOR-NEXT: [[TMP16:%.*]] = getelementptr inbounds half, ptr [[ARG:%.*]], i64 [[TMP6]]
; NOVECTOR-NEXT: store half [[TMP14]], ptr [[TMP16]], align 8
; NOVECTOR-NEXT: [[TMP17:%.*]] = getelementptr inbounds half, ptr [[ARG1]], i64 [[TMP7]]
; NOVECTOR-NEXT: [[TMP18:%.*]] = load half, ptr [[TMP17]], align 2
-; NOVECTOR-NEXT: [[TMP19:%.*]] = fmul fast half [[TMP18]], 0xH5380
-; NOVECTOR-NEXT: [[TMP20:%.*]] = fadd fast half [[TMP19]], 0xH57F0
+; NOVECTOR-NEXT: [[TMP19:%.*]] = fmul fast half [[TMP18]], f0x5380
+; NOVECTOR-NEXT: [[TMP20:%.*]] = fadd fast half [[TMP19]], f0x57F0
; NOVECTOR-NEXT: [[TMP21:%.*]] = getelementptr inbounds half, ptr [[ARG]], i64 [[TMP7]]
; NOVECTOR-NEXT: store half [[TMP20]], ptr [[TMP21]], align 2
; NOVECTOR-NEXT: ret void
@@ -43,14 +43,14 @@ define void @fusion(ptr noalias nocapture align 256 dereferenceable(19267584) %a
%tmp7 = or disjoint i64 %tmp6, 1
%tmp11 = getelementptr inbounds half, ptr %arg1, i64 %tmp6
%tmp12 = load half, ptr %tmp11, align 8
- %tmp13 = fmul fast half %tmp12, 0xH5380
- %tmp14 = fadd fast half %tmp13, 0xH57F0
+ %tmp13 = fmul fast half %tmp12, f0x5380
+ %tmp14 = fadd fast half %tmp13, f0x57F0
%tmp16 = getelementptr inbounds half, ptr %arg, i64 %tmp6
store half %tmp14, ptr %tmp16, align 8
%tmp17 = getelementptr inbounds half, ptr %arg1, i64 %tmp7
%tmp18 = load half, ptr %tmp17, align 2
- %tmp19 = fmul fast half %tmp18, 0xH5380
- %tmp20 = fadd fast half %tmp19, 0xH57F0
+ %tmp19 = fmul fast half %tmp18, f0x5380
+ %tmp20 = fadd fast half %tmp19, f0x57F0
%tmp21 = getelementptr inbounds half, ptr %arg, i64 %tmp7
store half %tmp20, ptr %tmp21, align 2
ret void
diff --git a/llvm/test/Transforms/SLPVectorizer/RISCV/reductions.ll b/llvm/test/Transforms/SLPVectorizer/RISCV/reductions.ll
index 85131758853b3d..98a9e8e747bc5d 100644
--- a/llvm/test/Transforms/SLPVectorizer/RISCV/reductions.ll
+++ b/llvm/test/Transforms/SLPVectorizer/RISCV/reductions.ll
@@ -1238,7 +1238,7 @@ define half @fadd_4xf16(ptr %p) {
;
; ZVFH-LABEL: @fadd_4xf16(
; ZVFH-NEXT: [[TMP1:%.*]] = load <4 x half>, ptr [[P:%.*]], align 2
-; ZVFH-NEXT: [[TMP2:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half 0xH0000, <4 x half> [[TMP1]])
+; ZVFH-NEXT: [[TMP2:%.*]] = call fast half @llvm.vector.reduce.fadd.v4f16(half f0x0000, <4 x half> [[TMP1]])
; ZVFH-NEXT: ret half [[TMP2]]
;
%x0 = load half, ptr %p
diff --git a/llvm/test/Transforms/SLPVectorizer/RISCV/strided-unsupported-type.ll b/llvm/test/Transforms/SLPVectorizer/RISCV/strided-unsupported-type.ll
index c0e1ab56c110bf..e1630f98366bb6 100644
--- a/llvm/test/Transforms/SLPVectorizer/RISCV/strided-unsupported-type.ll
+++ b/llvm/test/Transforms/SLPVectorizer/RISCV/strided-unsupported-type.ll
@@ -14,12 +14,12 @@ define void @loads() {
entry:
%_M_value.imagp.i266 = getelementptr { fp128, fp128 }, ptr null, i64 0, i32 1
%0 = load fp128, ptr null, align 16
- %cmp.i382 = fcmp une fp128 %0, 0xL00000000000000000000000000000000
+ %cmp.i382 = fcmp une fp128 %0, f0x00000000000000000000000000000000
%1 = load fp128, ptr %_M_value.imagp.i266, align 16
- %cmp4.i385 = fcmp une fp128 %1, 0xL00000000000000000000000000000000
+ %cmp4.i385 = fcmp une fp128 %1, f0x00000000000000000000000000000000
call void null(i32 0, ptr null, i32 0)
- %cmp.i386 = fcmp une fp128 %0, 0xL00000000000000000000000000000000
- %cmp2.i388 = fcmp une fp128 %1, 0xL00000000000000000000000000000000
+ %cmp.i386 = fcmp une fp128 %0, f0x00000000000000000000000000000000
+ %cmp2.i388 = fcmp une fp128 %1, f0x00000000000000000000000000000000
ret void
}
diff --git a/llvm/test/Transforms/SLPVectorizer/X86/fabs-cost-softfp.ll b/llvm/test/Transforms/SLPVectorizer/X86/fabs-cost-softfp.ll
index f7bba85a8c4857..5dbbbff92a218e 100644
--- a/llvm/test/Transforms/SLPVectorizer/X86/fabs-cost-softfp.ll
+++ b/llvm/test/Transforms/SLPVectorizer/X86/fabs-cost-softfp.ll
@@ -14,7 +14,7 @@ define void @vectorize_fp128(fp128 %c, fp128 %d) #0 {
; CHECK-NEXT: [[TMP0:%.*]] = insertelement <2 x fp128> poison, fp128 [[C:%.*]], i32 0
; CHECK-NEXT: [[TMP1:%.*]] = insertelement <2 x fp128> [[TMP0]], fp128 [[D:%.*]], i32 1
; CHECK-NEXT: [[TMP2:%.*]] = call <2 x fp128> @llvm.fabs.v2f128(<2 x fp128> [[TMP1]])
-; CHECK-NEXT: [[TMP3:%.*]] = fcmp oeq <2 x fp128> [[TMP2]], splat (fp128 0xL00000000000000007FFF000000000000)
+; CHECK-NEXT: [[TMP3:%.*]] = fcmp oeq <2 x fp128> [[TMP2]], splat (fp128 f0x7FFF0000000000000000000000000000)
; CHECK-NEXT: [[TMP4:%.*]] = extractelement <2 x i1> [[TMP3]], i32 0
; CHECK-NEXT: [[TMP5:%.*]] = extractelement <2 x i1> [[TMP3]], i32 1
; CHECK-NEXT: [[OR_COND39:%.*]] = or i1 [[TMP4]], [[TMP5]]
@@ -26,9 +26,9 @@ define void @vectorize_fp128(fp128 %c, fp128 %d) #0 {
;
entry:
%0 = tail call fp128 @llvm.fabs.f128(fp128 %c)
- %cmpinf10 = fcmp oeq fp128 %0, 0xL00000000000000007FFF000000000000
+ %cmpinf10 = fcmp oeq fp128 %0, f0x7FFF0000000000000000000000000000
%1 = tail call fp128 @llvm.fabs.f128(fp128 %d)
- %cmpinf12 = fcmp oeq fp128 %1, 0xL00000000000000007FFF000000000000
+ %cmpinf12 = fcmp oeq fp128 %1, f0x7FFF0000000000000000000000000000
%or.cond39 = or i1 %cmpinf10, %cmpinf12
br i1 %or.cond39, label %if.then13, label %if.end24
diff --git a/llvm/test/Transforms/SLPVectorizer/scalarazied-result.ll b/llvm/test/Transforms/SLPVectorizer/scalarazied-result.ll
index 2570cdb45e1e78..48948e61064c0d 100644
--- a/llvm/test/Transforms/SLPVectorizer/scalarazied-result.ll
+++ b/llvm/test/Transforms/SLPVectorizer/scalarazied-result.ll
@@ -9,8 +9,8 @@ define void @test() {
;
entry:
%0 = extractelement <8 x half> zeroinitializer, i64 1
- %tobool = fcmp une half %0, 0xH0000
+ %tobool = fcmp une half %0, f0x0000
%1 = extractelement <8 x half> zeroinitializer, i64 1
- %tobool3 = fcmp une half %1, 0xH0000
+ %tobool3 = fcmp une half %1, f0x0000
ret void
}
diff --git a/llvm/test/Transforms/SROA/ppcf128-no-fold.ll b/llvm/test/Transforms/SROA/ppcf128-no-fold.ll
index f5804ee3557a0f..c69179c4e9e1f2 100644
--- a/llvm/test/Transforms/SROA/ppcf128-no-fold.ll
+++ b/llvm/test/Transforms/SROA/ppcf128-no-fold.ll
@@ -10,9 +10,9 @@ declare void @bar(ptr, [2 x i128])
define void @foo(ptr %v) #0 {
; CHECK-LABEL: @foo(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast ppc_fp128 0xM403B0000000000000000000000000000 to i128
+; CHECK-NEXT: [[TMP0:%.*]] = bitcast ppc_fp128 f0x0000000000000000403B000000000000 to i128
; CHECK-NEXT: [[DOTFCA_0_INSERT:%.*]] = insertvalue [2 x i128] poison, i128 [[TMP0]], 0
-; CHECK-NEXT: [[TMP1:%.*]] = bitcast ppc_fp128 0xM4093B400000000000000000000000000 to i128
+; CHECK-NEXT: [[TMP1:%.*]] = bitcast ppc_fp128 f0x00000000000000004093B40000000000 to i128
; CHECK-NEXT: [[DOTFCA_1_INSERT:%.*]] = insertvalue [2 x i128] [[DOTFCA_0_INSERT]], i128 [[TMP1]], 1
; CHECK-NEXT: call void @bar(ptr [[V:%.*]], [2 x i128] [[DOTFCA_1_INSERT]])
; CHECK-NEXT: ret void
@@ -21,9 +21,9 @@ entry:
%v.addr = alloca ptr, align 8
%z = alloca %struct.ld2, align 16
store ptr %v, ptr %v.addr, align 8
- store ppc_fp128 0xM403B0000000000000000000000000000, ptr %z, align 16
+ store ppc_fp128 f0x0000000000000000403B000000000000, ptr %z, align 16
%arrayidx2 = getelementptr inbounds [2 x ppc_fp128], ptr %z, i32 0, i64 1
- store ppc_fp128 0xM4093B400000000000000000000000000, ptr %arrayidx2, align 16
+ store ppc_fp128 f0x00000000000000004093B40000000000, ptr %arrayidx2, align 16
%0 = load ptr, ptr %v.addr, align 8
%1 = load [2 x i128], ptr %z, align 1
call void @bar(ptr %0, [2 x i128] %1)
diff --git a/llvm/test/Transforms/SROA/select-load.ll b/llvm/test/Transforms/SROA/select-load.ll
index 9de765071b535b..90d3cf291c027a 100644
--- a/llvm/test/Transforms/SROA/select-load.ll
+++ b/llvm/test/Transforms/SROA/select-load.ll
@@ -9,12 +9,12 @@
define <2 x i16> @test_load_bitcast_select(i1 %cond1, i1 %cond2) {
; CHECK-LABEL: @test_load_bitcast_select(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast half 0xHFFFF to i16
-; CHECK-NEXT: [[TMP1:%.*]] = bitcast half 0xH0000 to i16
+; CHECK-NEXT: [[TMP0:%.*]] = bitcast half f0xFFFF to i16
+; CHECK-NEXT: [[TMP1:%.*]] = bitcast half f0x0000 to i16
; CHECK-NEXT: [[LD1_SROA_SPECULATED:%.*]] = select i1 [[COND1:%.*]], i16 [[TMP0]], i16 [[TMP1]]
; CHECK-NEXT: [[V1:%.*]] = insertelement <2 x i16> poison, i16 [[LD1_SROA_SPECULATED]], i32 0
-; CHECK-NEXT: [[TMP2:%.*]] = bitcast half 0xHFFFF to i16
-; CHECK-NEXT: [[TMP3:%.*]] = bitcast half 0xH0000 to i16
+; CHECK-NEXT: [[TMP2:%.*]] = bitcast half f0xFFFF to i16
+; CHECK-NEXT: [[TMP3:%.*]] = bitcast half f0x0000 to i16
; CHECK-NEXT: [[LD2_SROA_SPECULATED:%.*]] = select i1 [[COND2:%.*]], i16 [[TMP2]], i16 [[TMP3]]
; CHECK-NEXT: [[V2:%.*]] = insertelement <2 x i16> [[V1]], i16 [[LD2_SROA_SPECULATED]], i32 1
; CHECK-NEXT: ret <2 x i16> [[V2]]
@@ -22,8 +22,8 @@ define <2 x i16> @test_load_bitcast_select(i1 %cond1, i1 %cond2) {
entry:
%true = alloca half, align 2
%false = alloca half, align 2
- store half 0xHFFFF, ptr %true, align 2
- store half 0xH0000, ptr %false, align 2
+ store half f0xFFFF, ptr %true, align 2
+ store half f0x0000, ptr %false, align 2
%sel1 = select i1 %cond1, ptr %true, ptr %false
%ld1 = load i16, ptr %sel1, align 2
%v1 = insertelement <2 x i16> poison, i16 %ld1, i32 0
diff --git a/llvm/test/Transforms/Scalarizer/min-bits.ll b/llvm/test/Transforms/Scalarizer/min-bits.ll
index 97cc71626e2084..58aa4ac0c35eef 100644
--- a/llvm/test/Transforms/Scalarizer/min-bits.ll
+++ b/llvm/test/Transforms/Scalarizer/min-bits.ll
@@ -799,8 +799,8 @@ define void @phi_v2f16(ptr %base, i64 %bound) {
; MIN16-NEXT: [[BASE_I1:%.*]] = getelementptr half, ptr [[BASE:%.*]], i32 1
; MIN16-NEXT: br label [[LOOP:%.*]]
; MIN16: loop:
-; MIN16-NEXT: [[X_I0:%.*]] = phi half [ 0xH0000, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
-; MIN16-NEXT: [[X_I1:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I0:%.*]] = phi half [ f0x0000, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I1:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
; MIN16-NEXT: [[IDX:%.*]] = phi i64 [ 0, [[ENTRY]] ], [ [[IDX_NEXT:%.*]], [[LOOP]] ]
; MIN16-NEXT: [[P:%.*]] = getelementptr <2 x half>, ptr [[BASE]], i64 [[IDX]]
; MIN16-NEXT: [[A_I0:%.*]] = load half, ptr [[P]], align 2
@@ -857,9 +857,9 @@ define void @phi_v3f16(ptr %base, i64 %bound) {
; MIN16-NEXT: [[BASE_I2:%.*]] = getelementptr half, ptr [[BASE]], i32 2
; MIN16-NEXT: br label [[LOOP:%.*]]
; MIN16: loop:
-; MIN16-NEXT: [[X_I0:%.*]] = phi half [ 0xH0000, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
-; MIN16-NEXT: [[X_I1:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
-; MIN16-NEXT: [[X_I2:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I2:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I0:%.*]] = phi half [ f0x0000, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I1:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I2:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I2:%.*]], [[LOOP]] ]
; MIN16-NEXT: [[IDX:%.*]] = phi i64 [ 0, [[ENTRY]] ], [ [[IDX_NEXT:%.*]], [[LOOP]] ]
; MIN16-NEXT: [[P:%.*]] = getelementptr <3 x half>, ptr [[BASE]], i64 [[IDX]]
; MIN16-NEXT: [[A_I0:%.*]] = load half, ptr [[P]], align 2
@@ -885,7 +885,7 @@ define void @phi_v3f16(ptr %base, i64 %bound) {
; MIN32-NEXT: br label [[LOOP:%.*]]
; MIN32: loop:
; MIN32-NEXT: [[X_I0:%.*]] = phi <2 x half> [ zeroinitializer, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
-; MIN32-NEXT: [[X_I1:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
+; MIN32-NEXT: [[X_I1:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
; MIN32-NEXT: [[IDX:%.*]] = phi i64 [ 0, [[ENTRY]] ], [ [[IDX_NEXT:%.*]], [[LOOP]] ]
; MIN32-NEXT: [[P:%.*]] = getelementptr <3 x half>, ptr [[BASE]], i64 [[IDX]]
; MIN32-NEXT: [[A_I0:%.*]] = load <2 x half>, ptr [[P]], align 2
@@ -927,10 +927,10 @@ define void @phi_v4f16(ptr %base, i64 %bound) {
; MIN16-NEXT: [[BASE_I3:%.*]] = getelementptr half, ptr [[BASE]], i32 3
; MIN16-NEXT: br label [[LOOP:%.*]]
; MIN16: loop:
-; MIN16-NEXT: [[X_I0:%.*]] = phi half [ 0xH0000, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
-; MIN16-NEXT: [[X_I1:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
-; MIN16-NEXT: [[X_I2:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I2:%.*]], [[LOOP]] ]
-; MIN16-NEXT: [[X_I3:%.*]] = phi half [ 0xH0000, [[ENTRY]] ], [ [[X_NEXT_I3:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I0:%.*]] = phi half [ f0x0000, [[ENTRY:%.*]] ], [ [[X_NEXT_I0:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I1:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I1:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I2:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I2:%.*]], [[LOOP]] ]
+; MIN16-NEXT: [[X_I3:%.*]] = phi half [ f0x0000, [[ENTRY]] ], [ [[X_NEXT_I3:%.*]], [[LOOP]] ]
; MIN16-NEXT: [[IDX:%.*]] = phi i64 [ 0, [[ENTRY]] ], [ [[IDX_NEXT:%.*]], [[LOOP]] ]
; MIN16-NEXT: [[P:%.*]] = getelementptr <4 x half>, ptr [[BASE]], i64 [[IDX]]
; MIN16-NEXT: [[A_I0:%.*]] = load half, ptr [[P]], align 2
diff --git a/llvm/test/Transforms/TypePromotion/AArch64/bitcast.ll b/llvm/test/Transforms/TypePromotion/AArch64/bitcast.ll
index 883674a0f64e06..1da0314e714aab 100644
--- a/llvm/test/Transforms/TypePromotion/AArch64/bitcast.ll
+++ b/llvm/test/Transforms/TypePromotion/AArch64/bitcast.ll
@@ -22,12 +22,12 @@ entry:
define i1 @halfbitcast() {
; CHECK-LABEL: define i1 @halfbitcast() {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast half 0xH8000 to i16
+; CHECK-NEXT: [[TMP0:%.*]] = bitcast half f0x8000 to i16
; CHECK-NEXT: [[DOTNOT114:%.*]] = icmp eq i16 [[TMP0]], 0
; CHECK-NEXT: ret i1 [[DOTNOT114]]
;
entry:
- %0 = bitcast half 0xH8000 to i16
+ %0 = bitcast half f0x8000 to i16
%.not114 = icmp eq i16 %0, 0
ret i1 %.not114
}
diff --git a/llvm/test/Transforms/Util/libcalls-shrinkwrap-long-double.ll b/llvm/test/Transforms/Util/libcalls-shrinkwrap-long-double.ll
index c2b981c81c75d7..9788a96f4ab658 100644
--- a/llvm/test/Transforms/Util/libcalls-shrinkwrap-long-double.ll
+++ b/llvm/test/Transforms/Util/libcalls-shrinkwrap-long-double.ll
@@ -6,8 +6,8 @@ target triple = "x86_64-unknown-linux-gnu"
define void @test_range_error(x86_fp80 %value) {
entry:
%call_0 = call x86_fp80 @coshl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xKC00CB174000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK400CB174000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, f0xC00CB174000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x400CB174000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT:[0-9]+]]
; CHECK: [[CALL_LABEL]]:
@@ -16,8 +16,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_1 = call x86_fp80 @expl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xKC00CB21C000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK400CB170000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, f0xC00CB21C000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x400CB170000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -26,8 +26,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_3 = call x86_fp80 @exp2l(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xKC00D807A000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK400CB1DC000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, f0xC00D807A000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x400CB1DC000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -36,8 +36,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_4 = call x86_fp80 @sinhl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xKC00CB174000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK400CB174000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp olt x86_fp80 %value, f0xC00CB174000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x400CB174000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -46,7 +46,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_5 = call x86_fp80 @expm1l(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK400CB170000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x400CB170000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_5 = call x86_fp80 @expm1l(x86_fp80 %value)
@@ -59,8 +59,8 @@ entry:
define void @test_range_error_strictfp(x86_fp80 %value) strictfp {
entry:
%call_0 = call x86_fp80 @coshl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE:%.*]], x86_fp80 0xKC00CB174000000000000, metadata !"olt", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK400CB174000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE:%.*]], x86_fp80 f0xC00CB174000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x400CB174000000000000, metadata !"ogt", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT:[0-9]+]]
; CHECK: [[CALL_LABEL]]:
@@ -69,8 +69,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_1 = call x86_fp80 @expl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKC00CB21C000000000000, metadata !"olt", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK400CB170000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xC00CB21C000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x400CB170000000000000, metadata !"ogt", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -79,8 +79,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_3 = call x86_fp80 @exp2l(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKC00D807A000000000000, metadata !"olt", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK400CB1DC000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xC00D807A000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x400CB1DC000000000000, metadata !"ogt", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -89,8 +89,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_4 = call x86_fp80 @sinhl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKC00CB174000000000000, metadata !"olt", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK400CB174000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xC00CB174000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x400CB174000000000000, metadata !"ogt", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -99,7 +99,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_5 = call x86_fp80 @expm1l(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK400CB170000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x400CB170000000000000, metadata !"ogt", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_5 = call x86_fp80 @expm1l(x86_fp80 %value)
@@ -119,8 +119,8 @@ declare x86_fp80 @expm1l(x86_fp80)
define void @test_domain_error(x86_fp80 %value) {
entry:
%call_00 = call x86_fp80 @acosl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK3FFF8000000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xKBFFF8000000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x3FFF8000000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp olt x86_fp80 %value, f0xBFFF8000000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -129,8 +129,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_01 = call x86_fp80 @asinl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp ogt x86_fp80 %value, 0xK3FFF8000000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xKBFFF8000000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp ogt x86_fp80 %value, f0x3FFF8000000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp olt x86_fp80 %value, f0xBFFF8000000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -139,8 +139,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_02 = call x86_fp80 @cosl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp oeq x86_fp80 %value, 0xKFFFF8000000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp oeq x86_fp80 %value, 0xK7FFF8000000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp oeq x86_fp80 %value, f0xFFFF8000000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp oeq x86_fp80 %value, f0x7FFF8000000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -149,8 +149,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_03 = call x86_fp80 @sinl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp oeq x86_fp80 %value, 0xKFFFF8000000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp oeq x86_fp80 %value, 0xK7FFF8000000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp oeq x86_fp80 %value, f0xFFFF8000000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp oeq x86_fp80 %value, f0x7FFF8000000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -159,7 +159,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_04 = call x86_fp80 @acoshl(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xK3FFF8000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp olt x86_fp80 %value, f0x3FFF8000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_04 = call x86_fp80 @acoshl(x86_fp80 %value)
@@ -167,7 +167,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_05 = call x86_fp80 @sqrtl(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp olt x86_fp80 %value, 0xK00000000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp olt x86_fp80 %value, f0x00000000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_05 = call x86_fp80 @sqrtl(x86_fp80 %value)
@@ -175,8 +175,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_06 = call x86_fp80 @atanhl(x86_fp80 %value)
-; CHECK: [[COND1:%[0-9]+]] = fcmp oge x86_fp80 %value, 0xK3FFF8000000000000000
-; CHECK: [[COND2:%[0-9]+]] = fcmp ole x86_fp80 %value, 0xKBFFF8000000000000000
+; CHECK: [[COND1:%[0-9]+]] = fcmp oge x86_fp80 %value, f0x3FFF8000000000000000
+; CHECK: [[COND2:%[0-9]+]] = fcmp ole x86_fp80 %value, f0xBFFF8000000000000000
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -185,7 +185,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_07 = call x86_fp80 @logl(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, 0xK00000000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, f0x00000000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_07 = call x86_fp80 @logl(x86_fp80 %value)
@@ -193,7 +193,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_08 = call x86_fp80 @log10l(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, 0xK00000000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, f0x00000000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_08 = call x86_fp80 @log10l(x86_fp80 %value)
@@ -201,7 +201,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_09 = call x86_fp80 @log2l(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, 0xK00000000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, f0x00000000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_09 = call x86_fp80 @log2l(x86_fp80 %value)
@@ -209,7 +209,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_10 = call x86_fp80 @logbl(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, 0xK00000000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, f0x00000000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_10 = call x86_fp80 @logbl(x86_fp80 %value)
@@ -217,7 +217,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_11 = call x86_fp80 @log1pl(x86_fp80 %value)
-; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, 0xKBFFF8000000000000000
+; CHECK: [[COND:%[0-9]+]] = fcmp ole x86_fp80 %value, f0xBFFF8000000000000000
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_11 = call x86_fp80 @log1pl(x86_fp80 %value)
@@ -230,8 +230,8 @@ entry:
define void @test_domain_error_strictfp(x86_fp80 %value) strictfp {
entry:
%call_00 = call x86_fp80 @acosl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE:%.*]], x86_fp80 0xK3FFF8000000000000000, metadata !"ogt", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKBFFF8000000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE:%.*]], x86_fp80 f0x3FFF8000000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xBFFF8000000000000000, metadata !"olt", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -240,8 +240,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_01 = call x86_fp80 @asinl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK3FFF8000000000000000, metadata !"ogt", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKBFFF8000000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x3FFF8000000000000000, metadata !"ogt", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xBFFF8000000000000000, metadata !"olt", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -250,8 +250,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_02 = call x86_fp80 @cosl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKFFFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK7FFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xFFFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x7FFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -260,8 +260,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_03 = call x86_fp80 @sinl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKFFFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK7FFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xFFFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x7FFF8000000000000000, metadata !"oeq", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -270,7 +270,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_04 = call x86_fp80 @acoshl(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK3FFF8000000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x3FFF8000000000000000, metadata !"olt", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_04 = call x86_fp80 @acoshl(x86_fp80 %value)
@@ -278,7 +278,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_05 = call x86_fp80 @sqrtl(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK00000000000000000000, metadata !"olt", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x00000000000000000000, metadata !"olt", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_05 = call x86_fp80 @sqrtl(x86_fp80 %value)
@@ -286,8 +286,8 @@ entry:
; CHECK: [[END_LABEL]]:
%call_06 = call x86_fp80 @atanhl(x86_fp80 %value) strictfp
-; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK3FFF8000000000000000, metadata !"oge", metadata !"fpexcept.strict")
-; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKBFFF8000000000000000, metadata !"ole", metadata !"fpexcept.strict")
+; CHECK: [[COND1:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x3FFF8000000000000000, metadata !"oge", metadata !"fpexcept.strict")
+; CHECK: [[COND2:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xBFFF8000000000000000, metadata !"ole", metadata !"fpexcept.strict")
; CHECK: [[COND:%[0-9]+]] = or i1 [[COND2]], [[COND1]]
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
@@ -296,7 +296,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_07 = call x86_fp80 @logl(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_07 = call x86_fp80 @logl(x86_fp80 %value)
@@ -304,7 +304,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_08 = call x86_fp80 @log10l(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_08 = call x86_fp80 @log10l(x86_fp80 %value)
@@ -312,7 +312,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_09 = call x86_fp80 @log2l(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_09 = call x86_fp80 @log2l(x86_fp80 %value)
@@ -320,7 +320,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_10 = call x86_fp80 @logbl(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xK00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0x00000000000000000000, metadata !"ole", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_10 = call x86_fp80 @logbl(x86_fp80 %value)
@@ -328,7 +328,7 @@ entry:
; CHECK: [[END_LABEL]]:
%call_11 = call x86_fp80 @log1pl(x86_fp80 %value) strictfp
-; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 0xKBFFF8000000000000000, metadata !"ole", metadata !"fpexcept.strict")
+; CHECK: [[COND:%[0-9]+]] = call i1 @llvm.experimental.constrained.fcmp.f80(x86_fp80 [[VALUE]], x86_fp80 f0xBFFF8000000000000000, metadata !"ole", metadata !"fpexcept.strict")
; CHECK: br i1 [[COND]], label %[[CALL_LABEL:cdce.call[0-9]*]], label %[[END_LABEL:cdce.end[0-9]*]], !prof ![[BRANCH_WEIGHT]]
; CHECK: [[CALL_LABEL]]:
; CHECK-NEXT: %call_11 = call x86_fp80 @log1pl(x86_fp80 %value)
diff --git a/llvm/test/Transforms/VectorCombine/AArch64/shuffletoidentity.ll b/llvm/test/Transforms/VectorCombine/AArch64/shuffletoidentity.ll
index f4c27794d3930c..c7cfba8cc12eb1 100644
--- a/llvm/test/Transforms/VectorCombine/AArch64/shuffletoidentity.ll
+++ b/llvm/test/Transforms/VectorCombine/AArch64/shuffletoidentity.ll
@@ -339,7 +339,7 @@ define <8 x i8> @constantdiff2(<8 x i8> %a) {
define <8 x half> @constantsplatf(<8 x half> %a) {
; CHECK-LABEL: @constantsplatf(
-; CHECK-NEXT: [[R:%.*]] = fadd <8 x half> [[A:%.*]], splat (half 0xH4900)
+; CHECK-NEXT: [[R:%.*]] = fadd <8 x half> [[A:%.*]], splat (half f0x4900)
; CHECK-NEXT: ret <8 x half> [[R]]
;
%ab = shufflevector <8 x half> %a, <8 x half> poison, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
@@ -1206,7 +1206,7 @@ define <16 x i32> @const_types(<16 x i32> %wide.vec, <16 x i32> %wide.vec116) {
define <32 x half> @cast_types(<32 x i16> %wide.vec) {
; CHECK-LABEL: @cast_types(
; CHECK-NEXT: [[TMP1:%.*]] = sitofp <32 x i16> [[WIDE_VEC:%.*]] to <32 x half>
-; CHECK-NEXT: [[INTERLEAVED_VEC:%.*]] = fmul fast <32 x half> [[TMP1]], splat (half 0xH0200)
+; CHECK-NEXT: [[INTERLEAVED_VEC:%.*]] = fmul fast <32 x half> [[TMP1]], splat (half f0x0200)
; CHECK-NEXT: ret <32 x half> [[INTERLEAVED_VEC]]
;
%strided.vec = shufflevector <32 x i16> %wide.vec, <32 x i16> poison, <8 x i32> <i32 0, i32 4, i32 8, i32 12, i32 16, i32 20, i32 24, i32 28>
@@ -1214,13 +1214,13 @@ define <32 x half> @cast_types(<32 x i16> %wide.vec) {
%strided.vec50 = shufflevector <32 x i16> %wide.vec, <32 x i16> poison, <8 x i32> <i32 2, i32 6, i32 10, i32 14, i32 18, i32 22, i32 26, i32 30>
%strided.vec51 = shufflevector <32 x i16> %wide.vec, <32 x i16> poison, <8 x i32> <i32 3, i32 7, i32 11, i32 15, i32 19, i32 23, i32 27, i32 31>
%5 = sitofp <8 x i16> %strided.vec to <8 x half>
- %6 = fmul fast <8 x half> %5, splat (half 0xH0200)
+ %6 = fmul fast <8 x half> %5, splat (half f0x0200)
%7 = sitofp <8 x i16> %strided.vec49 to <8 x half>
- %8 = fmul fast <8 x half> %7, splat (half 0xH0200)
+ %8 = fmul fast <8 x half> %7, splat (half f0x0200)
%9 = sitofp <8 x i16> %strided.vec50 to <8 x half>
- %10 = fmul fast <8 x half> %9, splat (half 0xH0200)
+ %10 = fmul fast <8 x half> %9, splat (half f0x0200)
%11 = sitofp <8 x i16> %strided.vec51 to <8 x half>
- %12 = fmul fast <8 x half> %11, splat (half 0xH0200)
+ %12 = fmul fast <8 x half> %11, splat (half f0x0200)
%13 = shufflevector <8 x half> %6, <8 x half> %8, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
%14 = shufflevector <8 x half> %10, <8 x half> %12, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
%interleaved.vec = shufflevector <16 x half> %13, <16 x half> %14, <32 x i32> <i32 0, i32 8, i32 16, i32 24, i32 1, i32 9, i32 17, i32 25, i32 2, i32 10, i32 18, i32 26, i32 3, i32 11, i32 19, i32 27, i32 4, i32 12, i32 20, i32 28, i32 5, i32 13, i32 21, i32 29, i32 6, i32 14, i32 22, i32 30, i32 7, i32 15, i32 23, i32 31>
diff --git a/llvm/test/Transforms/VectorCombine/RISCV/vpintrin-scalarization.ll b/llvm/test/Transforms/VectorCombine/RISCV/vpintrin-scalarization.ll
index c1234586690882..955688324142b9 100644
--- a/llvm/test/Transforms/VectorCombine/RISCV/vpintrin-scalarization.ll
+++ b/llvm/test/Transforms/VectorCombine/RISCV/vpintrin-scalarization.ll
@@ -1527,7 +1527,7 @@ define <vscale x 8 x half> @fadd_nxv1f16_allonesmask(<vscale x 8 x half> %x, hal
; VEC-COMBINE-LABEL: @fadd_nxv1f16_allonesmask(
; VEC-COMBINE-NEXT: [[SPLAT:%.*]] = insertelement <vscale x 8 x i1> poison, i1 true, i32 0
; VEC-COMBINE-NEXT: [[MASK:%.*]] = shufflevector <vscale x 8 x i1> [[SPLAT]], <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
-; VEC-COMBINE-NEXT: [[TMP1:%.*]] = fadd half [[Y:%.*]], 0xH5140
+; VEC-COMBINE-NEXT: [[TMP1:%.*]] = fadd half [[Y:%.*]], f0x5140
; VEC-COMBINE-NEXT: [[DOTSPLATINSERT:%.*]] = insertelement <vscale x 8 x half> poison, half [[TMP1]], i64 0
; VEC-COMBINE-NEXT: [[TMP2:%.*]] = shufflevector <vscale x 8 x half> [[DOTSPLATINSERT]], <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer
; VEC-COMBINE-NEXT: [[TMP3:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[X:%.*]], <vscale x 8 x half> [[TMP2]], <vscale x 8 x i1> [[MASK]], i32 [[EVL:%.*]])
@@ -1538,7 +1538,7 @@ define <vscale x 8 x half> @fadd_nxv1f16_allonesmask(<vscale x 8 x half> %x, hal
; NO-VEC-COMBINE-NEXT: [[MASK:%.*]] = shufflevector <vscale x 8 x i1> [[SPLAT]], <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
; NO-VEC-COMBINE-NEXT: [[TMP1:%.*]] = insertelement <vscale x 8 x half> poison, half [[Y:%.*]], i64 0
; NO-VEC-COMBINE-NEXT: [[TMP2:%.*]] = shufflevector <vscale x 8 x half> [[TMP1]], <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer
-; NO-VEC-COMBINE-NEXT: [[TMP3:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[TMP2]], <vscale x 8 x half> splat (half 0xH5140), <vscale x 8 x i1> [[MASK]], i32 [[EVL:%.*]])
+; NO-VEC-COMBINE-NEXT: [[TMP3:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[TMP2]], <vscale x 8 x half> splat (half f0x5140), <vscale x 8 x i1> [[MASK]], i32 [[EVL:%.*]])
; NO-VEC-COMBINE-NEXT: [[TMP4:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[X:%.*]], <vscale x 8 x half> [[TMP3]], <vscale x 8 x i1> [[MASK]], i32 [[EVL]])
; NO-VEC-COMBINE-NEXT: ret <vscale x 8 x half> [[TMP4]]
;
@@ -1555,7 +1555,7 @@ define <vscale x 8 x half> @fadd_nxv8f16_anymask(<vscale x 8 x half> %x, half %y
; ALL-LABEL: @fadd_nxv8f16_anymask(
; ALL-NEXT: [[TMP1:%.*]] = insertelement <vscale x 8 x half> poison, half [[Y:%.*]], i64 0
; ALL-NEXT: [[TMP2:%.*]] = shufflevector <vscale x 8 x half> [[TMP1]], <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer
-; ALL-NEXT: [[TMP3:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[TMP2]], <vscale x 8 x half> splat (half 0xH5140), <vscale x 8 x i1> [[MASK:%.*]], i32 [[EVL:%.*]])
+; ALL-NEXT: [[TMP3:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[TMP2]], <vscale x 8 x half> splat (half f0x5140), <vscale x 8 x i1> [[MASK:%.*]], i32 [[EVL:%.*]])
; ALL-NEXT: [[TMP4:%.*]] = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> [[X:%.*]], <vscale x 8 x half> [[TMP3]], <vscale x 8 x i1> [[MASK]], i32 [[EVL]])
; ALL-NEXT: ret <vscale x 8 x half> [[TMP4]]
;
diff --git a/llvm/test/Verifier/AMDGPU/intrinsic-immarg.ll b/llvm/test/Verifier/AMDGPU/intrinsic-immarg.ll
index cb99632c287b38..6b1396cabd4b5c 100644
--- a/llvm/test/Verifier/AMDGPU/intrinsic-immarg.ll
+++ b/llvm/test/Verifier/AMDGPU/intrinsic-immarg.ll
@@ -89,22 +89,22 @@ declare void @llvm.amdgcn.exp.compr.v2f16(i32, i32, <2 x half>, <2 x half>, i1,
define void @exp_compr_invalid_inputs(i32 %tgt, i32 %en, i1 %bool) {
; CHECK: immarg operand has non-immediate parameter
; CHECK-NEXT: i32 %en
- ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 %en, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> <half 0xH3800, half 0xH4400>, i1 true, i1 false)
+ ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 %en, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> <half f0x3800, half f0x4400>, i1 true, i1 false)
call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 %en, <2 x half> <half 1.0, half 2.0>, <2 x half> <half 0.5, half 4.0>, i1 true, i1 false)
; CHECK: immarg operand has non-immediate parameter
; CHECK-NEXT: i32 %tgt
- ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 %tgt, i32 5, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> <half 0xH3800, half 0xH4400>, i1 true, i1 false)
+ ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 %tgt, i32 5, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> <half f0x3800, half f0x4400>, i1 true, i1 false)
call void @llvm.amdgcn.exp.compr.v2f16(i32 %tgt, i32 5, <2 x half> <half 1.0, half 2.0>, <2 x half> <half 0.5, half 4.0>, i1 true, i1 false)
; CHECK: immarg operand has non-immediate parameter
; CHECK-NEXT: i1 %bool
- ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 5, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> <half 0xH3800, half 0xH4400>, i1 %bool, i1 false)
+ ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 5, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> <half f0x3800, half f0x4400>, i1 %bool, i1 false)
call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 5, <2 x half> <half 1.0, half 2.0>, <2 x half> <half 0.5, half 4.0>, i1 %bool, i1 false)
; CHECK: immarg operand has non-immediate parameter
; CHECK-NEXT: i1 %bool
- ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 5, <2 x half> <half 0xH3C00, half 0xH4000>, <2 x half> <half 0xH3800, half 0xH4400>, i1 false, i1 %bool)
+ ; CHECK-NEXT: call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 5, <2 x half> <half f0x3C00, half f0x4000>, <2 x half> <half f0x3800, half f0x4400>, i1 false, i1 %bool)
call void @llvm.amdgcn.exp.compr.v2f16(i32 0, i32 5, <2 x half> <half 1.0, half 2.0>, <2 x half> <half 0.5, half 4.0>, i1 false, i1 %bool)
ret void
}
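
For readers skimming the hunks above, a minimal standalone sketch (not part of the patch; the function name @sample is hypothetical) of the new bitpattern spelling the updated CHECK lines expect:

define half @sample() {
  ; f0x3C00 carries the same raw bits the legacy 0xH3C00 spelling encoded
  ; (the value half 1.0); only the prefix changes, not the bit pattern.
  ret half f0x3C00
}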