[llvm-bugs] [Bug 35249] New: [CUDA][NVPTX] Incorrect compilation of __activemask()
via llvm-bugs
llvm-bugs at lists.llvm.org
Wed Nov 8 09:32:27 PST 2017
https://bugs.llvm.org/show_bug.cgi?id=35249
Bug ID: 35249
Summary: [CUDA][NVPTX] Incorrect compilation of __activemask()
Product: new-bugs
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: new bugs
Assignee: unassignedbugs at nondot.org
Reporter: acjacob at us.ibm.com
CC: llvm-bugs at lists.llvm.org
Hi,
Compiling and running the following CUDA program with Clang/LLVM versus NVCC
produces different results on a Volta GPU with CUDA-9.0. I believe there may
be a bug in LLVM.
#include <cstdio>

__global__ void kernel() {
  printf("Activemask %x\n", __activemask());
  if (threadIdx.x % 2 == 0)
    printf("Activemask %x\n", __activemask());
}
__activemask() returns a 32-bit mask with one bit per warp lane, set if that
lane is active (i.e. executing the instruction) at the point of the call. If I
launch the kernel with 1 threadblock of 32 threads, I expect the first printf
to report all 32 lanes active and the second to report only the even lanes as
active.
I get the following output when compiled with NVCC:
Activemask ffffffff
<<32 times>>
Activemask 55555555
<<16 times>>
When compiled with Clang/LLVM I get the following output:
Activemask ffffffff
<<48 times>>
__activemask() is lowered to the LLVM intrinsic llvm.nvvm.vote.ballot(i1
true). The IR becomes incorrect after the 'Early CSE' pass, which treats the
two calls as redundant and eliminates the second one. I can prevent the
optimization by marking the intrinsic as having a side effect (adding
IntrHasSideEffects and removing IntrNoMem) in
include/llvm/IR/IntrinsicsNVVM.td. Is this the appropriate property? It
probably needs to be applied to all vote intrinsics, at least for Volta.
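For illustration, the property swap I tried in IntrinsicsNVVM.td looks roughly
like this (a sketch of the change, not the exact in-tree definition, which may
carry additional properties):

```tablegen
// Before: with IntrNoMem, Early CSE treats two calls with identical
// arguments as interchangeable and deletes the second one.
def int_nvvm_vote_ballot :
    Intrinsic<[llvm_i32_ty], [llvm_i1_ty], [IntrNoMem, IntrConvergent]>;

// After: marking the intrinsic as having a side effect keeps each call
// distinct, since its result depends on which lanes reach it.
def int_nvvm_vote_ballot :
    Intrinsic<[llvm_i32_ty], [llvm_i1_ty],
              [IntrHasSideEffects, IntrConvergent]>;
```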
Thanks.