[clang] [llvm] Clang: convert `__m64` intrinsics to unconditionally use SSE2 instead of MMX. (PR #96540)
James Y Knight via cfe-commits
cfe-commits at lists.llvm.org
Tue Jun 25 12:52:43 PDT 2024
================
@@ -2108,9 +2106,8 @@ static __inline__ __m128i __DEFAULT_FN_ATTRS _mm_add_epi32(__m128i __a,
/// \param __b
/// A 64-bit integer.
/// \returns A 64-bit integer containing the sum of both parameters.
-static __inline__ __m64 __DEFAULT_FN_ATTRS_MMX _mm_add_si64(__m64 __a,
- __m64 __b) {
- return (__m64)__builtin_ia32_paddq((__v1di)__a, (__v1di)__b);
+static __inline__ __m64 __DEFAULT_FN_ATTRS _mm_add_si64(__m64 __a, __m64 __b) {
+ return (__m64)(((unsigned long long)__a) + ((unsigned long long)__b));
----------------
jyknight wrote:
This must use an unsigned add, not a signed add, because wraparound must be defined behavior.
The same rationale applies to the other addition/subtraction cases you've noted in various places below, so I'm going to just mark those conversations as resolved instead of responding to each separately.
https://github.com/llvm/llvm-project/pull/96540