[PATCH] D27429: [Chrono][Darwin] On Darwin use CLOCK_UPTIME_RAW instead of CLOCK_MONOTONIC
Howard Hinnant via Phabricator via cfe-commits
cfe-commits at lists.llvm.org
Fri Dec 9 17:39:00 PST 2016
howard.hinnant added a comment.
I would like to offer two thoughts:
1. `steady_clock` is often used to time very short events. It should be respected as the "high_resolution_clock". This means that two consecutive calls to `steady_clock::now()` should return results with nanosecond resolution, and should not be equal unless the implementation is actually able to complete a call to `steady_clock::now()` in less than 1ns.
2. Here's a test of thought 1:
#include <chrono>
#include <time.h>
#include <errno.h>
#include <sys/sysctl.h>
#include <sys/time.h>   // struct timeval

// Wall-clock time since boot, computed from the kernel's recorded boot time.
std::chrono::milliseconds
uptime()
{
    using namespace std::chrono;
    timeval ts;
    auto ts_len = sizeof(ts);
    int mib[2] = { CTL_KERN, KERN_BOOTTIME };
    auto constexpr mib_len = sizeof(mib)/sizeof(mib[0]);
    if (sysctl(mib, mib_len, &ts, &ts_len, nullptr, 0) == 0)
    {
        system_clock::time_point boot{seconds{ts.tv_sec} + microseconds{ts.tv_usec}};
        return duration_cast<milliseconds>(system_clock::now() - boot);
    }
    return 0ms;
}

// Time since boot as reported by CLOCK_UPTIME_RAW.
std::chrono::nanoseconds
get_uptime_raw()
{
    using namespace std::chrono;
    struct timespec tp;
    clock_gettime(CLOCK_UPTIME_RAW, &tp);
    return seconds(tp.tv_sec) + nanoseconds(tp.tv_nsec);
}

// Time since boot as reported by CLOCK_MONOTONIC.
std::chrono::nanoseconds
get_monotonic()
{
    using namespace std::chrono;
    struct timespec tp;
    clock_gettime(CLOCK_MONOTONIC, &tp);
    return seconds(tp.tv_sec) + nanoseconds(tp.tv_nsec);
}

#include "date.h"
#include <iostream>

// Print a duration as days followed by hh:mm:ss.fraction.
template <class Duration>
void
display(Duration time)
{
    using namespace date;
    auto d = floor<days>(time);
    time -= d;
    std::cout << d.count() << " days " << make_time(time) << '\n';
}

int
main()
{
    using namespace std::chrono;
    for (int i = 0; i < 4; ++i)
    {
        std::cout << i << '\n';
        // Two consecutive reads of each time source; if the source is too
        // coarse, the delta between the reads collapses to 0ns.
        {
            auto t0 = uptime();
            auto t1 = uptime();
            std::cout << "boot time : "; display(t0);
            std::cout << "boot time : "; display(t1);
            std::cout << "delta boot time : " << nanoseconds{t1 - t0}.count() << "ns\n";
        }
        {
            auto t0 = get_uptime_raw();
            auto t1 = get_uptime_raw();
            std::cout << "CLOCK_UPTIME_RAW : "; display(t0);
            std::cout << "CLOCK_UPTIME_RAW : "; display(t1);
            std::cout << "delta CLOCK_UPTIME_RAW time : " << nanoseconds{t1 - t0}.count() << "ns\n";
        }
        {
            auto t0 = get_monotonic();
            auto t1 = get_monotonic();
            std::cout << "CLOCK_MONOTONIC : "; display(t0);
            std::cout << "CLOCK_MONOTONIC : "; display(t1);
            std::cout << "delta CLOCK_MONOTONIC time : " << nanoseconds{t1 - t0}.count() << "ns\n";
        }
        {
            auto t0 = std::chrono::steady_clock::now().time_since_epoch();
            auto t1 = std::chrono::steady_clock::now().time_since_epoch();
            std::cout << "mach_absolute_time : "; display(t0);
            std::cout << "mach_absolute_time : "; display(t1);
            std::cout << "delta mach_absolute_time time : " << nanoseconds{t1 - t0}.count() << "ns\n";
        }
        std::cout << '\n';
    }
}
Sorry, it requires "date.h" from https://github.com/HowardHinnant/date . It is header-only and portable. It's just used for formatting purposes and can be dropped if it really stresses you out.
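If pulling in "date.h" is a problem, a minimal stand-in for display() that just prints the raw nanosecond count is enough to see the deltas (a sketch under that assumption; the formatted output below still comes from the date.h version):

// Sketch: replacement for display() that avoids the "date.h" dependency and
// only prints the duration's raw nanosecond count.
template <class Duration>
void
display(Duration time)
{
    using namespace std::chrono;
    std::cout << duration_cast<nanoseconds>(time).count() << "ns\n";
}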
For me this outputs (at -O3):
Jade:~/Development/cljunk> a.out
0
boot time : 11 days 22:30:42.827
boot time : 11 days 22:30:42.827
delta boot time : 0ns
CLOCK_UPTIME_RAW : 11 days 22:22:28.960672112
CLOCK_UPTIME_RAW : 11 days 22:22:28.960672266
delta CLOCK_UPTIME_RAW time : 154ns
CLOCK_MONOTONIC : 11 days 22:30:42.827318000
CLOCK_MONOTONIC : 11 days 22:30:42.827318000
delta CLOCK_MONOTONIC time : 0ns
mach_absolute_time : 11 days 22:22:28.960714394
mach_absolute_time : 11 days 22:22:28.960714504
delta mach_absolute_time time : 110ns
1
boot time : 11 days 22:30:42.827
boot time : 11 days 22:30:42.827
delta boot time : 0ns
CLOCK_UPTIME_RAW : 11 days 22:22:28.960761867
CLOCK_UPTIME_RAW : 11 days 22:22:28.960761932
delta CLOCK_UPTIME_RAW time : 65ns
CLOCK_MONOTONIC : 11 days 22:30:42.827402000
CLOCK_MONOTONIC : 11 days 22:30:42.827402000
delta CLOCK_MONOTONIC time : 0ns
mach_absolute_time : 11 days 22:22:28.960793667
mach_absolute_time : 11 days 22:22:28.960793747
delta mach_absolute_time time : 80ns
2
boot time : 11 days 22:30:42.827
boot time : 11 days 22:30:42.827
delta boot time : 0ns
CLOCK_UPTIME_RAW : 11 days 22:22:28.960835164
CLOCK_UPTIME_RAW : 11 days 22:22:28.960835227
delta CLOCK_UPTIME_RAW time : 63ns
CLOCK_MONOTONIC : 11 days 22:30:42.827476000
CLOCK_MONOTONIC : 11 days 22:30:42.827476000
delta CLOCK_MONOTONIC time : 0ns
mach_absolute_time : 11 days 22:22:28.960867852
mach_absolute_time : 11 days 22:22:28.960867944
delta mach_absolute_time time : 92ns
3
boot time : 11 days 22:30:42.827
boot time : 11 days 22:30:42.827
delta boot time : 0ns
CLOCK_UPTIME_RAW : 11 days 22:22:28.960911646
CLOCK_UPTIME_RAW : 11 days 22:22:28.960911737
delta CLOCK_UPTIME_RAW time : 91ns
CLOCK_MONOTONIC : 11 days 22:30:42.827553000
CLOCK_MONOTONIC : 11 days 22:30:42.827553000
delta CLOCK_MONOTONIC time : 0ns
mach_absolute_time : 11 days 22:22:28.960945129
mach_absolute_time : 11 days 22:22:28.960945196
delta mach_absolute_time time : 67ns
3. Third thought (off-by-one errors are rampant! ;-) ):
`CLOCK_MONOTONIC` gives a more accurate report of system uptime, and thus more accurately respects the intent of `steady_clock`'s definition. However, the difference is slight. Only `CLOCK_UPTIME_RAW` and `mach_absolute_time` are able to time functions in the nanosecond range, which is the most important use case for `steady_clock`.
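For concreteness, here is a minimal sketch of that use case (f() is a hypothetical stand-in for whatever short operation is being timed); if now() only ticks in microseconds, the delta below collapses to 0ns and the measurement is useless:

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    auto f = []{ return 42; };      // stand-in for a very short operation
    auto t0 = steady_clock::now();
    volatile auto r = f();          // volatile store keeps the call from being discarded
    (void)r;
    auto t1 = steady_clock::now();
    std::cout << duration_cast<nanoseconds>(t1 - t0).count() << "ns\n";
}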
Imho, only `CLOCK_UPTIME_RAW` or `mach_absolute_time` are acceptable implementations of `steady_clock` on macOS. I feel strongly enough about this that I would like to see a `static_assert` that `CLOCK_MONOTONIC` is never accidentally chosen by the preprocessor when targeting macOS or iOS. I can't speak directly to other platforms, but I would like to see tests such as this applied to them as well. `steady_clock` should be able to measure short events without returning 0ns. The Windows experience with `<chrono>` has taught us as well that anything less is a dissatisfying experience for customers.
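A sketch of the kind of guard meant here (the configuration macro is hypothetical, not an existing libc++ macro) would be something along these lines in the steady_clock implementation:

// Hypothetical guard: fail the build if an Apple target ever selects a
// CLOCK_MONOTONIC-based steady_clock.  The macro
// _LIBCPP_STEADY_CLOCK_USES_CLOCK_MONOTONIC is illustrative only.
#if defined(__APPLE__) && defined(_LIBCPP_STEADY_CLOCK_USES_CLOCK_MONOTONIC)
#  error "steady_clock on Apple platforms must use CLOCK_UPTIME_RAW or mach_absolute_time, not CLOCK_MONOTONIC"
#endif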
https://reviews.llvm.org/D27429