[PATCH] D16593: [CUDA] Implemented device-side support for functions in <cmath>.

Justin Lebar via cfe-commits cfe-commits at lists.llvm.org
Tue Jan 26 11:57:53 PST 2016


jlebar added a comment.

Missing (?) functions:

- div, ldiv, lldiv, imaxdiv
- imaxabs

If you left these out intentionally (I forget whether NVIDIA supports div_t), that's probably fine, but maybe add a comment?

Regarding the "::" comments: some are nits, because I think we end up calling the right thing as long as nobody defines a conflicting function in namespace std. But some aren't nits, because as far as I can tell they lead to infinite recursion. We should probably just use fully-qualified names for all the functions we call.


================
Comment at: lib/Headers/__clang_cuda_cmath.h:33
@@ +32,3 @@
+// builtins if CUDA does not provide a suitable function.
+// We also provide device-side std::abs() for integer types.
+
----------------
Why is std::abs a special case that needs to be called out here?

================
Comment at: lib/Headers/__clang_cuda_cmath.h:38
@@ +37,3 @@
+namespace std {
+__DEVICE__ long long abs(long long n) { return ::llabs(n); }
+__DEVICE__ long abs(long n) { return ::labs(n); }
----------------
We should probably respect the standard and not define overloads that were only added in C++11 (like abs(long long)) when we're not compiling in C++11 mode.

================
Comment at: lib/Headers/__clang_cuda_cmath.h:93
@@ +92,3 @@
+  return __builtin_fpclassify(FP_NAN, FP_INFINITE, FP_NORMAL, FP_SUBNORMAL,
+                              FP_ZERO, x);
+}
----------------
Looking through Bugzilla, it appears that this builtin may try to invoke library functions, which may or may not exist in our case (e.g., extern "C" fabs vs. ::fabs).  It's probably worth checking this one in particular to make sure it works.

================
Comment at: lib/Headers/__clang_cuda_cmath.h:99
@@ +98,3 @@
+}
+__DEVICE__ float frexp(float arg, int *exp) { return frexpf(arg, exp); }
+__DEVICE__ double frexp(double arg, int *exp) { return frexp(arg, exp); }
----------------
Nit: ::

================
Comment at: lib/Headers/__clang_cuda_cmath.h:100
@@ +99,3 @@
+__DEVICE__ float frexp(float arg, int *exp) { return frexpf(arg, exp); }
+__DEVICE__ double frexp(double arg, int *exp) { return frexp(arg, exp); }
+__DEVICE__ float hypot(float x, float y) { return ::hypotf(x, y); }
----------------
Need :: to avoid infinite recursion, right?

================
Comment at: lib/Headers/__clang_cuda_cmath.h:105
@@ +104,3 @@
+__DEVICE__ int ilogb(double arg) { return ::ilogb(arg); }
+__DEVICE__ bool isfinite(float x) { return __finitef(x); }
+__DEVICE__ bool isfinite(double x) { return __finite(x); }
----------------
Where's __finitef coming from?  Same for the other __ functions used here.

================
Comment at: lib/Headers/__clang_cuda_cmath.h:105
@@ +104,3 @@
+__DEVICE__ int ilogb(double arg) { return ::ilogb(arg); }
+__DEVICE__ bool isfinite(float x) { return __finitef(x); }
+__DEVICE__ bool isfinite(double x) { return __finite(x); }
----------------
jlebar wrote:
> Where's __finitef coming from?  Same for the other __ functions used here.
Nit: "::" in front of all the __ functions.

================
Comment at: lib/Headers/__clang_cuda_cmath.h:146
@@ +145,3 @@
+__DEVICE__ long labs(long n) { return ::labs(n); }
+__DEVICE__ float ldexp(float arg, int exp) { return ldexpf(arg, exp); }
+__DEVICE__ double ldexp(double arg, int exp) { return ldexp(arg, exp); }
----------------
Nit: ::ldexpf?

================
Comment at: lib/Headers/__clang_cuda_cmath.h:147
@@ +146,3 @@
+__DEVICE__ float ldexp(float arg, int exp) { return ldexpf(arg, exp); }
+__DEVICE__ double ldexp(double arg, int exp) { return ldexp(arg, exp); }
+__DEVICE__ float lgamma(float x) { return ::lgammaf(x); }
----------------
:: not optional on this one, right?


http://reviews.llvm.org/D16593