[libcxx-commits] [PATCH] D151717: [libc++][PSTL] Add a GCD backend
Louis Dionne via Phabricator via libcxx-commits
libcxx-commits at lists.llvm.org
Thu Jul 6 10:35:27 PDT 2023
ldionne added inline comments.
================
Comment at: libcxx/include/__algorithm/pstl_backends/cpu_backends/libdispatch.h:74-82
+template <class _Func1, class _Func2>
+_LIBCPP_HIDE_FROM_ABI void __libdispatch_invoke_parallel(_Func1&& __f1, _Func2&& __f2) {
+  __libdispatch::__dispatch_apply(2, [&](size_t __index) {
+    if (__index == 0)
+      __f1();
+    else
+      __f2();
----------------
This isn't used anymore.
================
Comment at: libcxx/include/__algorithm/pstl_backends/cpu_backends/libdispatch.h:188-189
+  auto __partitions = __libdispatch::__partition_chunks(__first, __last);
+  auto __values = std::__make_uninitialized_buffer<_Value[]>(
+      nothrow, __partitions.__chunk_count_, [](_Value* __ptr, size_t __count) { std::destroy_n(__ptr, __count); });
+
----------------
Future optimization: We should allocate only one temporary value per worker thread, not one per chunk. Then we should pad the storage to make sure they all fall on different cache lines to avoid false sharing.
Can you add a TODO comment mentioning that?
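For illustration, a minimal standalone sketch of that idea; the `__padded_value` helper and the buffer shape are hypothetical and not part of this patch. Each worker thread gets one temporary, aligned to its own cache line so concurrent stores don't false-share:

    #include <new>     // std::hardware_destructive_interference_size (a fixed 64-byte fallback works where it's unavailable)
    #include <thread>  // std::thread::hardware_concurrency

    // Hypothetical helper: one temporary per worker thread, padded/aligned so
    // each one lands on its own cache line.
    template <class _Value>
    struct alignas(std::hardware_destructive_interference_size) __padded_value {
      _Value __v_;
    };

    // The reduction buffer would then hold std::thread::hardware_concurrency()
    // elements of __padded_value<_Value> instead of one _Value per chunk.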
================
Comment at: libcxx/src/pstl/libdispatch.cpp:32
+  partitions.__chunk_count_ = [&] {
+    ptrdiff_t cores = std::max(1u, thread::hardware_concurrency());
+
----------------
Can you add a TODO comment about using the number of cores that libdispatch will actually use instead of the total number of cores on the system?
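Purely as a sketch of how the requested TODO could read (the `__worker_count` helper name is made up, and the exact wording is up to you):

    #include <algorithm>  // std::max
    #include <cstddef>    // std::ptrdiff_t
    #include <thread>     // std::thread::hardware_concurrency

    // Hypothetical helper, not part of this patch.
    inline std::ptrdiff_t __worker_count() {
      // TODO: Use the number of threads libdispatch will actually dedicate to the
      //       queue instead of the total number of cores on the system.
      return std::max(1u, std::thread::hardware_concurrency());
    }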
================
Comment at: libcxx/src/pstl/libdispatch.cpp:36-37
+
+  // This is an approximation of `log(1.01, sqrt(n))` which seems to be reasonable for `n` larger than 500 and tops
+  // out at 800 tasks for n ~ 8 million
+  auto large = [](ptrdiff_t n) { return static_cast<ptrdiff_t>(100.499 * std::log(std::sqrt(n))); };
----------------
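For context on where the constant comes from: by change of base, log_1.01(sqrt(n)) = ln(sqrt(n)) / ln(1.01), and 1 / ln(1.01) is approximately 100.499, hence `100.499 * std::log(std::sqrt(n))`. A small standalone check, not part of the patch:

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
      for (double n : {500.0, 1e5, 8e6}) {
        double exact  = std::log(std::sqrt(n)) / std::log(1.01); // log base 1.01 of sqrt(n)
        double approx = 100.499 * std::log(std::sqrt(n));        // constant used in the patch
        std::printf("n=%.0f  exact=%.1f  approx=%.1f\n", n, exact, approx);
      }
    }

For n around 8 million both come out near 800 tasks, matching the comment in the quoted code.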
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D151717/new/
https://reviews.llvm.org/D151717