[libcxx-commits] [PATCH] D114129: [libc++] Fix `uniform_int_distribution` for 128-bit result type

Mark de Wever via Phabricator via libcxx-commits libcxx-commits at lists.llvm.org
Wed Nov 24 11:17:21 PST 2021


Mordante requested changes to this revision.
Mordante added a comment.
This revision now requires changes to proceed.

Thanks for looking into this issue!



================
Comment at: libcxx/include/__bits:49
+inline _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR
+int __libcpp_clz(__uint128_t __x) _NOEXCEPT {
+  return ((__x >> 64) == 0)
----------------
In the header `<bit>` there is `__countl_zero` which has 128-bit support. Can we use that function instead?
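
For illustration, the leading-zero count of a 128-bit value can be composed from two
64-bit counts, which is the pattern the snippet above appears to use. A rough sketch
against the standard std::countl_zero from C++20's <bit>; the helper name and exact
shape are made up here, not libc++'s real internals:

  #include <bit>
  #include <cstdint>

  // Hypothetical helper: count leading zeros of a 128-bit value by
  // splitting it into two 64-bit halves.
  constexpr int clz128(__uint128_t x) noexcept {
    const auto hi = static_cast<std::uint64_t>(x >> 64);
    const auto lo = static_cast<std::uint64_t>(x);
    // When the high half is zero, the result is 64 plus the leading
    // zeros of the low half; otherwise it is the leading zeros of the
    // high half alone.
    return hi == 0 ? 64 + std::countl_zero(lo) : std::countl_zero(hi);
  }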


================
Comment at: libcxx/include/__random/log2.h:23
+template <class _UIntType, _UIntType _Xp, size_t _Rp>
+struct __log2_imp;
+
----------------
Does this header do the same as `__bit_log2` in `<bit>`?
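
To illustrate the suspected overlap: the floor of log2 for an unsigned value can be
derived directly from the leading-zero count, which is the idea behind `__bit_log2`.
A sketch with a placeholder name (floor_log2 is not a libc++ identifier):

  #include <bit>
  #include <cstdint>
  #include <limits>

  // Illustration only: floor(log2(x)) from the leading-zero count.
  template <class UInt>
  constexpr int floor_log2(UInt x) noexcept {
    return std::numeric_limits<UInt>::digits - 1 - std::countl_zero(x);
  }

  static_assert(floor_log2(std::uint64_t{1} << 40) == 40);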


================
Comment at: libcxx/test/std/numerics/rand/rand.dis/rand.dist.uni/rand.dist.uni.int/int128.pass.cpp:35
+      const __int128_t n = d(e);
+      all_in_range = all_in_range && (n <= UINT64_MAX);
+    }
----------------
This feels flaky. If the test happens to produce 100 random values below `UINT64_MAX`, the test fails.
I suggest selecting a specific random number engine with a fixed seed for this test.
Then you can add a stable check that values in the wanted range are produced.
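
Something along these lines, as a sketch only: the engine, seed, and bounds are
placeholders, and `__int128_t` support in `uniform_int_distribution` is the extension
this patch adds. With a deterministic engine the expected outcome can be verified once
and then asserted stably.

  #include <cassert>
  #include <cstdint>
  #include <random>

  int main() {
    // A fixed seed makes the produced sequence reproducible, so the
    // check that values above UINT64_MAX appear is deterministic
    // rather than probabilistic.
    std::mt19937_64 e(12345); // seed chosen for illustration only
    std::uniform_int_distribution<__int128_t> d(
        0, static_cast<__int128_t>(UINT64_MAX) * 2);
    bool saw_large = false;
    for (int i = 0; i < 100; ++i) {
      const __int128_t n = d(e);
      assert(n >= 0 && n <= static_cast<__int128_t>(UINT64_MAX) * 2);
      if (n > static_cast<__int128_t>(UINT64_MAX))
        saw_large = true;
    }
    // Expected to hold for this seed; pin the concrete expectation to
    // the observed deterministic output when writing the real test.
    assert(saw_large);
    return 0;
  }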


================
Comment at: libcxx/test/std/numerics/rand/rand.dis/rand.dist.uni/rand.dist.uni.int/int128.pass.cpp:51
+      assert(n >= a && n <= b);
+      all_in_range = all_in_range && (n >= INT64_MIN) && (n <= (INT64_MAX));
+    }
----------------
Why do we need this check? Isn't `assert(n >= a && n <= b);` enough?


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D114129/new/

https://reviews.llvm.org/D114129
