[clang] [llvm] Clang: convert `__m64` intrinsics to unconditionally use SSE2 instead of MMX. (PR #96540)

James Y Knight via cfe-commits cfe-commits at lists.llvm.org
Tue Jun 25 12:52:48 PDT 2024


================
@@ -177,7 +175,10 @@ _mm_abs_epi32(__m128i __a)
 /// \returns A 64-bit integer vector containing the concatenated right-shifted
 ///    value.
 #define _mm_alignr_pi8(a, b, n) \
-  ((__m64)__builtin_ia32_palignr((__v8qi)(__m64)(a), (__v8qi)(__m64)(b), (n)))
+  ((__m64)__builtin_shufflevector(                                       \
----------------
jyknight wrote:

Cannot use __trunc64 in a macro intended for external use, because __trunc64 is #undef'ed at the end of the header.

https://github.com/llvm/llvm-project/pull/96540
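
For context, the constraint is purely a preprocessor one: a function-like macro's body is expanded at the caller's use site, so any helper macro it references must still be defined there, whereas a static inline function bakes the helper's expansion in at the point of definition, before the #undef. Below is a minimal, self-contained sketch of that behavior; __trunc_helper, low32_inline, and low32_macro are hypothetical stand-ins, not the actual names or definitions in the header.

/* ---- hypothetical header ---- */
#define __trunc_helper(x) ((unsigned)((x) & 0xffffffffu)) /* stand-in for __trunc64 */

/* OK: the body is parsed here, while __trunc_helper is still defined,
   so its expansion is baked into the inline function. */
static inline unsigned low32_inline(unsigned long long x) {
  return __trunc_helper(x);
}

/* Not OK for external use: this macro's body is only expanded when the
   user invokes low32_macro(...), which happens after the #undef below. */
#define low32_macro(x) __trunc_helper(x)

#undef __trunc_helper
/* ---- end of hypothetical header ---- */

int main(void) {
  unsigned a = low32_inline(0x123456789abcdef0ULL); /* compiles fine */
  /* unsigned b = low32_macro(0x123456789abcdef0ULL);
     would expand to __trunc_helper(...), which is no longer a macro at
     this use site, so it becomes an undeclared identifier and fails. */
  return (int)a;
}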
