<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - Slow OpenMP reduction on NVIDIA GPUs - 100x performance regression"
href="https://bugs.llvm.org/show_bug.cgi?id=40473">40473</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>Slow OpenMP reduction on NVIDIA GPUs - 100x performance regression
</td>
</tr>
<tr>
<th>Product</th>
<td>OpenMP
</td>
</tr>
<tr>
<th>Version</th>
<td>unspecified
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Linux
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Clang Compiler Support
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedclangbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>csdaley@lbl.gov
</td>
</tr>
<tr>
<th>CC</th>
<td>llvm-bugs@lists.llvm.org
</td>
</tr></table>
<p>
<div>
<pre>Created <span class=""><a href="attachment.cgi?id=21390" name="attach_21390" title="Program to reproduce the slow OpenMP reduction on GPUs">attachment 21390</a> <a href="attachment.cgi?id=21390&action=edit" title="Program to reproduce the slow OpenMP reduction on GPUs">[details]</a></span>
Program to reproduce the slow OpenMP reduction on GPUs

Clang 8.0.0rc1 and the current git version of Clang 9.0.0 show a significant
performance regression in OpenMP reductions on the GPU in my test code. I am
using a platform with Intel Skylake CPUs and NVIDIA Volta GPUs. The slow-down
appears to be related to the large number of Clang-chosen OpenMP teams:
781250 teams, i.e. the 100000000-iteration loop trip count divided by the
128 threads per block. The OpenMP trace shows the following information
about the slow reduction kernel:

Target CUDA RTL --> Setting CUDA threads per block to default 128
Target CUDA RTL --> Using 781250 teams due to loop trip count 100000000 and
number of threads per block 128
Target CUDA RTL --> Launch kernel with 781250 blocks and 128 threads
Target CUDA RTL --> Launch of entry point at 0x000000000300d120 successful!
Target CUDA RTL --> Kernel execution at 0x000000000300d120 successful!

A Jan 5 2019 version of LLVM/Clang has much better performance. The OpenMP
trace from this version of Clang shows the following information:

Target CUDA RTL --> Setting CUDA threads per block to default 128
Target CUDA RTL --> Using default number of teams 128
Target CUDA RTL --> Launch kernel with 128 blocks and 128 threads
Target CUDA RTL --> Launch of entry point at 0x00000000030091b0 successful!
Target CUDA RTL --> Kernel execution at 0x00000000030091b0 successful!
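
(For reference: the "Target CUDA RTL" lines above look like libomptarget's
debug trace; assuming the offloading runtime was built with debug support,
an equivalent trace can be captured by setting LIBOMPTARGET_DEBUG for the
run, e.g. LIBOMPTARGET_DEBUG=1 srun -n 1 ./dot.)
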
I've attached a hacked version of the STREAM micro-benchmark to demonstrate
the problem. The key code is as follows:

#ifdef USE_128_TEAMS
# pragma omp target teams distribute parallel for map(tofrom:sum) \
             reduction(+:sum) num_teams(128)
#else
# pragma omp target teams distribute parallel for map(tofrom:sum) \
             reduction(+:sum)
#endif
for (j=0; j<STREAM_ARRAY_SIZE; j++) {
sum += a[j] * b[j];
}
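
For completeness, a rough standalone sketch of the same dot-product
reduction is below (this is not the attached STREAM variant; the array
size, initial values, and the explicit map clauses for a and b are
assumptions chosen to match the 100000000 trip count in the trace above):

#include <stdio.h>
#include <stdlib.h>

#define STREAM_ARRAY_SIZE 100000000L

int main(void) {
  double *a = malloc(sizeof(double) * STREAM_ARRAY_SIZE);
  double *b = malloc(sizeof(double) * STREAM_ARRAY_SIZE);
  double sum = 0.0;
  long j;

  if (a == NULL || b == NULL) return 1;

  /* Initialise the input vectors on the host. */
  for (j = 0; j < STREAM_ARRAY_SIZE; j++) {
    a[j] = 1.0;
    b[j] = 2.0;
  }

  /* Same reduction as above; USE_128_TEAMS pins the team count instead of
     leaving it to the runtime default. */
#ifdef USE_128_TEAMS
# pragma omp target teams distribute parallel for \
             map(to: a[0:STREAM_ARRAY_SIZE], b[0:STREAM_ARRAY_SIZE]) \
             map(tofrom: sum) reduction(+:sum) num_teams(128)
#else
# pragma omp target teams distribute parallel for \
             map(to: a[0:STREAM_ARRAY_SIZE], b[0:STREAM_ARRAY_SIZE]) \
             map(tofrom: sum) reduction(+:sum)
#endif
  for (j = 0; j < STREAM_ARRAY_SIZE; j++) {
    sum += a[j] * b[j];
  }

  printf("sum = %f (expected %f)\n", sum, 2.0 * STREAM_ARRAY_SIZE);
  free(a);
  free(b);
  return 0;
}

It is meant to be built with the same clang command lines shown below.
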
The bad performance occurs when using the default number of OpenMP thread
teams:

<span class="quote">> clang -O3 -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda dot.c -o dot && srun -n 1 ./dot</span>
...
-------------------------------------------------------------
Function    Best Rate MB/s   Avg time     Min time     Max time
Dot:                1097.5   1.458776     1.457925     1.459023
-------------------------------------------------------------

The performance loss can be recovered by requesting 128 thread teams only:

<span class="quote">> clang -DUSE_128_TEAMS -O3 -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda dot.c -o dot && srun -n 1 ./dot</span>
...
Function    Best Rate MB/s   Avg time     Min time     Max time
Dot:              197698.8   0.008628     0.008093     0.010635</pre>
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>