[Openmp-dev] CUDA error is: device kernel image is invalid
Itaru Kitayama via Openmp-dev
openmp-dev at lists.llvm.org
Sun Dec 13 00:52:16 PST 2020
Hi,
On JURECA-DC at JSC, I am seeing a run-time error like the one below:
Libomptarget --> Init target library!
Libomptarget --> Loading RTLs...
Libomptarget --> Loading library 'libomptarget.rtl.ppc64.so'...
Libomptarget --> Unable to load library 'libomptarget.rtl.ppc64.so':
libomptarget.rtl.ppc64.so: cannot open shared object file: No such
file or directory!
Libomptarget --> Loading library 'libomptarget.rtl.x86_64.so'...
Libomptarget --> Successfully loaded library 'libomptarget.rtl.x86_64.so'!
Libomptarget --> Registering RTL libomptarget.rtl.x86_64.so supporting
4 devices!
Libomptarget --> Loading library 'libomptarget.rtl.cuda.so'...
Target CUDA RTL --> Start initializing CUDA
Libomptarget --> Successfully loaded library 'libomptarget.rtl.cuda.so'!
Libomptarget --> Registering RTL libomptarget.rtl.cuda.so supporting 4 devices!
Libomptarget --> Loading library 'libomptarget.rtl.aarch64.so'...
Libomptarget --> Unable to load library 'libomptarget.rtl.aarch64.so':
libomptarget.rtl.aarch64.so: cannot open shared object file: No such
file or directory!
Libomptarget --> Loading library 'libomptarget.rtl.ve.so'...
Libomptarget --> Unable to load library 'libomptarget.rtl.ve.so':
libomptarget.rtl.ve.so: cannot open shared object file: No such file
or directory!
Libomptarget --> Loading library 'libomptarget.rtl.amdgpu.so'...
Libomptarget --> Unable to load library 'libomptarget.rtl.amdgpu.so':
libomptarget.rtl.amdgpu.so: cannot open shared object file: No such
file or directory!
Libomptarget --> RTLs loaded!
Libomptarget --> Image 0x00000000004031e0 is NOT compatible with RTL
libomptarget.rtl.x86_64.so!
Libomptarget --> Image 0x00000000004031e0 is compatible with RTL
libomptarget.rtl.cuda.so!
Libomptarget --> RTL 0x00000000021f5910 has index 0!
Libomptarget --> Registering image 0x00000000004031e0 with RTL
libomptarget.rtl.cuda.so!
Libomptarget --> Done registering entries!
Libomptarget --> Call to omp_get_num_devices returning 4
Libomptarget --> Default TARGET OFFLOAD policy is now mandatory
(devices were found)
Libomptarget --> Entering target region with entry point
0x0000000000403058 and device Id -1
Libomptarget --> Checking whether device 0 is ready.
Libomptarget --> Is the device 0 (local ID 0) initialized? 0
Target CUDA RTL --> Init requires flags to 1
Target CUDA RTL --> Getting device 0
Target CUDA RTL --> The primary context is inactive, set its flags to
CU_CTX_SCHED_BLOCKING_SYNC
Target CUDA RTL --> Max CUDA blocks per grid 2147483647 exceeds the
hard team limit 65536, capping at the hard limit
Target CUDA RTL --> Using 1024 CUDA threads per block
Target CUDA RTL --> Using warp size 32
Target CUDA RTL --> Device supports up to 65536 CUDA blocks and 1024
threads with a warp size of 32
Target CUDA RTL --> Default number of teams set according to library's
default 128
Target CUDA RTL --> Default number of threads set according to
library's default 128
Libomptarget --> Device 0 is ready to use.
Target CUDA RTL --> Load data from image 0x00000000004031e0
Target CUDA RTL --> Error returned from cuModuleLoadDataEx
Target CUDA RTL --> CUDA error is: device kernel image is invalid
Libomptarget --> Unable to generate entries table for device id 0.
Libomptarget --> Failed to init globals on device 0
Libomptarget --> Failed to get device 0 ready
Libomptarget error: run with env LIBOMPTARGET_INFO>1 to dump
host-target pointer maps
Libomptarget error: Build with debug information to provide more information
Libomptarget fatal error 1: failure of target construct while offloading is mandatory
/var/spool/parastation/jobs/8813449: line 22: 22331 Aborted
(core dumped) ./a.out
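My guess is that the device image embedded in a.out was built for a compute capability that does not match the GPU, which is the usual cause of "device kernel image is invalid" from cuModuleLoadDataEx. A sketch of the check I'd try (sm_80 is only my assumption for the A100s on JURECA-DC, and pi.c is a placeholder filename; substitute your own values):

```shell
# Query the GPU's compute capability (supported by newer nvidia-smi versions).
nvidia-smi --query-gpu=compute_cap --format=csv,noheader

# Rebuild targeting that architecture explicitly; sm_80 here is an
# assumption (A100) -- replace it with the value reported above.
clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda \
      -Xopenmp-target -march=sm_80 pi.c -o a.out
```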
Here's the sample code:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <omp.h>

static inline double calc_pi(size_t n)
{
    double h = 1.0 / n;
    double sum = 0.0, x;
    size_t i;
#pragma omp parallel for simd private(x) reduction(+:sum)
    for (i = 0; i < n; i++) {
        x = h * ((double) i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    return sum * h;
}

//#pragma omp declare target
int t1(void) {
    printf("eeee\n");
    return 0;
}
//#pragma omp end declare target

struct A {
};

const size_t N = 2000000000;

int main()
{
    double start, pi, end;

#pragma omp target
    {}

    printf(">> host: start\n");
    start = omp_get_wtime();
    pi = calc_pi(N);
    end = omp_get_wtime();
    printf(">> host: pi = %f (%f ms)\n", pi, (end - start) * 1000);

    printf(">> target: start\n");
    start = omp_get_wtime();
    double h = 1.0 / N;
    double sum = 0.0, x;
    size_t i;
#pragma omp target teams distribute parallel for simd private(x) map(sum) reduction(+:sum)
    for (i = 0; i < N; i++) {
        x = h * ((double) i + 0.5);
        sum += 4.0 / (1.0 + x * x);
        //t1();
        //struct A a;
    }
    pi = sum * h;
    end = omp_get_wtime();
    printf(">> target: pi = %f (%f ms)\n", pi, (end - start) * 1000);
    return EXIT_SUCCESS;
}