[llvm-bugs] [Bug 40473] New: Slow OpenMP reduction on NVIDIA GPUs - 100x performance regression
via llvm-bugs
llvm-bugs at lists.llvm.org
Fri Jan 25 16:59:57 PST 2019
https://bugs.llvm.org/show_bug.cgi?id=40473
Bug ID: 40473
Summary: Slow OpenMP reduction on NVIDIA GPUs - 100x
performance regression
Product: OpenMP
Version: unspecified
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
Priority: P
Component: Clang Compiler Support
Assignee: unassignedclangbugs at nondot.org
Reporter: csdaley at lbl.gov
CC: llvm-bugs at lists.llvm.org
Created attachment 21390
--> https://bugs.llvm.org/attachment.cgi?id=21390&action=edit
Program to reproduce the slow OpenMP reduction on GPUs
Clang 8.0.0rc1 and the current git version of Clang 9.0.0 show a significant
performance regression in OpenMP reductions on the GPU in my test code. I am
using a platform with Intel Skylake CPUs and NVIDIA Volta GPUs. The slowdown
appears to be related to the large number of OpenMP teams that Clang chooses.
The OpenMP trace shows the following information about the slow reduction kernel:
Target CUDA RTL --> Setting CUDA threads per block to default 128
Target CUDA RTL --> Using 781250 teams due to loop trip count 100000000 and
number of threads per block 128
Target CUDA RTL --> Launch kernel with 781250 blocks and 128 threads
Target CUDA RTL --> Launch of entry point at 0x000000000300d120 successful!
Target CUDA RTL --> Kernel execution at 0x000000000300d120 successful!
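(Note: the 781250 figure is simply the loop trip count divided by the block
size, 100000000 / 128 = 781250, i.e. one team per 128 loop iterations.)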
A Jan 5 2019 version of LLVM/Clang has much better performance. The OpenMP
trace from this version of Clang shows the following information:
Target CUDA RTL --> Setting CUDA threads per block to default 128
Target CUDA RTL --> Using default number of teams 128
Target CUDA RTL --> Launch kernel with 128 blocks and 128 threads
Target CUDA RTL --> Launch of entry point at 0x00000000030091b0 successful!
Target CUDA RTL --> Kernel execution at 0x00000000030091b0 successful!
I've attached a hacked version of the STREAM micro-benchmark to demonstrate the
problem. The key code is as follows:
#ifdef USE_128_TEAMS
# pragma omp target teams distribute parallel for map(tofrom:sum) \
         reduction(+:sum) num_teams(128)
#else
# pragma omp target teams distribute parallel for map(tofrom:sum) \
         reduction(+:sum)
#endif
  for (j=0; j<STREAM_ARRAY_SIZE; j++) {
    sum += a[j] * b[j];
  }
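For convenience, a minimal self-contained reproducer along the same lines (my
own sketch, not the attached STREAM code; the array size, initialization, data
mapping, and timing are placeholders) would look roughly like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 100000000   /* matches the loop trip count in the trace */

    int main(void) {
      double *a = malloc(N * sizeof(double));
      double *b = malloc(N * sizeof(double));
      double sum = 0.0;

      /* Initialize host arrays. */
      for (long j = 0; j < N; j++) { a[j] = 1.0; b[j] = 2.0; }

      double t0 = omp_get_wtime();
    #ifdef USE_128_TEAMS
      /* Explicitly request 128 teams (the fast case). */
      #pragma omp target teams distribute parallel for \
          map(to:a[0:N],b[0:N]) map(tofrom:sum) reduction(+:sum) num_teams(128)
    #else
      /* Let the runtime pick the number of teams (the slow case). */
      #pragma omp target teams distribute parallel for \
          map(to:a[0:N],b[0:N]) map(tofrom:sum) reduction(+:sum)
    #endif
      for (long j = 0; j < N; j++)
        sum += a[j] * b[j];
      double t1 = omp_get_wtime();

      printf("sum = %f, time = %f s\n", sum, t1 - t0);
      free(a); free(b);
      return 0;
    }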
The bad performance occurs when using the default number of OpenMP thread
teams:
> clang -O3 -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda dot.c -o dot && srun -n 1 ./dot
...
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Dot:                1097.5  1.458776     1.457925     1.459023
-------------------------------------------------------------
The lost performance can be recovered by explicitly requesting 128 thread teams:
> clang -DUSE_128_TEAMS -O3 -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda dot.c -o dot && srun -n 1 ./dot
...
Function    Best Rate MB/s  Avg time     Min time     Max time
Dot:              197698.8  0.008628     0.008093     0.010635
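(For reference, 197698.8 / 1097.5 is roughly a factor of 180, so the
default-teams case is well over 100x slower, consistent with the regression
described in the summary.)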