[llvm-branch-commits] [libcxx] release/19.x: [libc++] Increase atomic_ref's required alignment for small types (#99654) (PR #101492)
Tobias Hieta via llvm-branch-commits
llvm-branch-commits at lists.llvm.org
Fri Aug 2 00:25:28 PDT 2024
https://github.com/tru updated https://github.com/llvm/llvm-project/pull/101492
From 39e8e7797ae868e82b4184fbadf4572ff9bd3aa3 Mon Sep 17 00:00:00 2001
From: Damien L-G <dalg24 at gmail.com>
Date: Thu, 1 Aug 2024 10:39:27 -0400
Subject: [PATCH] [libc++] Increase atomic_ref's required alignment for small
types (#99654)
This patch increases the alignment requirement of std::atomic_ref
so that we can guarantee lock-free operations more often. Specifically,
we require types that are 1, 2, 4, 8, or 16 bytes in size to be aligned
to at least their size in order to be used with std::atomic_ref.
This is the case for most types; however, a notable exception is
`long long` on x86, which is 8 bytes in size but only has an
alignment of 4.
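As an illustrative sketch (assuming a 32-bit x86 target; the printed
values are platform-dependent, and std::atomic_ref requires C++20):

    #include <atomic>
    #include <cstdio>

    int main() {
      // On 32-bit x86, long long is 8 bytes but only 4-byte aligned.
      std::printf("sizeof(long long)  = %zu\n", sizeof(long long));
      std::printf("alignof(long long) = %zu\n", alignof(long long));

      // With this patch, required_alignment is raised to the size (8)
      // rather than the natural alignment (4) on such a target.
      std::printf("required_alignment = %zu\n",
                  std::atomic_ref<long long>::required_alignment);
    }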
As a result of this patch, one has to be more careful about the
alignment of objects used with std::atomic_ref. Failing to provide
a properly aligned object to std::atomic_ref violates a precondition
and is technically UB. On the flip side, this allows us
to provide an atomic_ref that is actually lock-free more often,
which is an important QoI property.
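For example, a caller can satisfy the precondition explicitly with
alignas (a minimal sketch; the function and variable names are
hypothetical):

    #include <atomic>

    void bump() {
      // Without alignas, a long long on x86 may be only 4-byte aligned,
      // which would violate atomic_ref's alignment precondition.
      alignas(std::atomic_ref<long long>::required_alignment) long long counter = 0;

      std::atomic_ref<long long> ref(counter);
      ref.fetch_add(1); // can now be lock-free on such a target
    }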
More information in the discussion at https://github.com/llvm/llvm-project/pull/99570#issuecomment-2237668661.
Co-authored-by: Louis Dionne <ldionne.2 at gmail.com>
(cherry picked from commit 59ca618e3b7aec8c32e24d781bae436dc99b2727)
---
libcxx/include/__atomic/atomic_ref.h | 17 +++++++++++------
.../atomics.ref/is_always_lock_free.pass.cpp | 2 +-
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/libcxx/include/__atomic/atomic_ref.h b/libcxx/include/__atomic/atomic_ref.h
index 2849b82e1a3dd..b0180a37ab500 100644
--- a/libcxx/include/__atomic/atomic_ref.h
+++ b/libcxx/include/__atomic/atomic_ref.h
@@ -57,11 +57,6 @@ struct __get_aligner_instance {
template <class _Tp>
struct __atomic_ref_base {
-protected:
- _Tp* __ptr_;
-
- _LIBCPP_HIDE_FROM_ABI __atomic_ref_base(_Tp& __obj) : __ptr_(std::addressof(__obj)) {}
-
private:
_LIBCPP_HIDE_FROM_ABI static _Tp* __clear_padding(_Tp& __val) noexcept {
_Tp* __ptr = std::addressof(__val);
@@ -108,10 +103,14 @@ struct __atomic_ref_base {
friend struct __atomic_waitable_traits<__atomic_ref_base<_Tp>>;
+ // require types that are 1, 2, 4, 8, or 16 bytes in length to be aligned to at least their size to be potentially
+ // used lock-free
+ static constexpr size_t __min_alignment = (sizeof(_Tp) & (sizeof(_Tp) - 1)) || (sizeof(_Tp) > 16) ? 0 : sizeof(_Tp);
+
public:
using value_type = _Tp;
- static constexpr size_t required_alignment = alignof(_Tp);
+ static constexpr size_t required_alignment = alignof(_Tp) > __min_alignment ? alignof(_Tp) : __min_alignment;
// The __atomic_always_lock_free builtin takes into account the alignment of the pointer if provided,
// so we create a fake pointer with a suitable alignment when querying it. Note that we are guaranteed
@@ -218,6 +217,12 @@ struct __atomic_ref_base {
}
_LIBCPP_HIDE_FROM_ABI void notify_one() const noexcept { std::__atomic_notify_one(*this); }
_LIBCPP_HIDE_FROM_ABI void notify_all() const noexcept { std::__atomic_notify_all(*this); }
+
+protected:
+ typedef _Tp _Aligned_Tp __attribute__((aligned(required_alignment)));
+ _Aligned_Tp* __ptr_;
+
+ _LIBCPP_HIDE_FROM_ABI __atomic_ref_base(_Tp& __obj) : __ptr_(std::addressof(__obj)) {}
};
template <class _Tp>
diff --git a/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp b/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp
index acdbf63a24d85..78e46c0397951 100644
--- a/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp
+++ b/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp
@@ -54,7 +54,7 @@ void check_always_lock_free(std::atomic_ref<T> const& a) {
#define CHECK_ALWAYS_LOCK_FREE(T) \
do { \
typedef T type; \
- type obj{}; \
+ alignas(std::atomic_ref<type>::required_alignment) type obj{}; \
std::atomic_ref<type> a(obj); \
check_always_lock_free(a); \
} while (0)
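For reference, the `__min_alignment` expression above relies on the
classic bit trick that `n & (n - 1)` is zero exactly when `n` is a
power of two (for `n > 0`). A standalone sketch of the same
computation (an illustration, not the libc++ code itself):

    #include <cstddef>

    // Mirrors the patch's logic: a type whose size is a power of two
    // no larger than 16 gets a minimum alignment equal to its size;
    // any other size yields 0 (no extra requirement).
    template <class T>
    constexpr std::size_t min_alignment =
        (sizeof(T) & (sizeof(T) - 1)) || (sizeof(T) > 16) ? 0 : sizeof(T);

    static_assert(min_alignment<char> == 1, "");
    static_assert(min_alignment<long long> == 8, "");  // even where alignof is 4
    static_assert(min_alignment<char[3]> == 0, "");    // 3 is not a power of two
    static_assert(min_alignment<char[32]> == 0, "");   // larger than 16 bytes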