[clang] [llvm] [X86][CodeGen] - Use shift operators for const value shifts, instead of built-ins for SSE emulation of MMX intrinsics. (PR #129197)
Evgenii Kudriashov via llvm-commits
llvm-commits at lists.llvm.org
Fri Feb 28 05:07:13 PST 2025
================
@@ -1115,11 +1115,11 @@ _mm_srl_si64(__m64 __m, __m64 __count)
 /// \param __count
 ///    A 32-bit integer value.
 /// \returns A 64-bit integer vector containing the right-shifted value.
-static __inline__ __m64 __DEFAULT_FN_ATTRS_SSE2
-_mm_srli_si64(__m64 __m, int __count)
-{
-    return __trunc64(__builtin_ia32_psrlqi128((__v2di)__anyext128(__m),
-                                              __count));
+static __inline__ __m64 __DEFAULT_FN_ATTRS_SSE2 _mm_srli_si64(__m64 __m,
+                                                              int __count) {
+  if (__builtin_constant_p(__count))
+    return (__m64)((__count > 63) ? 0 : ((long long)__m >> __count));
+  return __trunc64(__builtin_ia32_psrlqi128((__v2di)__anyext128(__m), __count));
----------------
e-kud wrote:
I'd like to note that we are changing the behavior for negative shifts. Before this change we returned zero; now the result is whatever the shift produces, because shifting by a negative count is UB. The intrinsic description doesn't specify what should happen for a negative `__count`.
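To make the divergence concrete, here is a small standalone sketch. The helper names `srli_old_model` and `srli_new_model` are hypothetical stand-ins that mirror the two code paths, not the header code itself; the "old" model reflects the zero-for-negative-count behavior described above, while the "new" model mirrors the constant path in the patch.

```c
#include <stdint.h>
#include <stdio.h>

/* Model of the pre-change behavior described above: once the count,
   viewed as unsigned, exceeds 63 (which covers negative ints), the
   result is zero. */
static uint64_t srli_old_model(uint64_t v, int count) {
  return ((unsigned)count > 63u) ? 0 : (v >> count);
}

/* Model of the new constant path: counts above 63 are handled
   explicitly, but a negative count reaches `>>` directly, which is
   undefined behavior in C. */
static uint64_t srli_new_model(uint64_t v, int count) {
  return (count > 63) ? 0 : (v >> count); /* UB if count < 0 */
}

int main(void) {
  uint64_t x = 0x8000000000000001ULL;
  printf("old, count -1: %016llx\n",
         (unsigned long long)srli_old_model(x, -1)); /* prints zero */
  printf("new, count  1: %016llx\n",
         (unsigned long long)srli_new_model(x, 1));
  /* srli_new_model(x, -1) would be undefined behavior. */
  return 0;
}
```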
https://github.com/llvm/llvm-project/pull/129197
More information about the llvm-commits mailing list