[llvm-dev] support for nvidia Turing GPU
Justin Lebar via llvm-dev
llvm-dev at lists.llvm.org
Fri Dec 28 23:59:38 PST 2018
Those of us who are the main compiler developers don't have any Turing GPUs
in hand, so we haven't tested the compiler on Turing. It's "supported"
today only inasmuch as "it compiles".
I am surprised to see this error and I'm not sure how it's happening. If
you can provide steps to reproduce, we might be able to assist you, although
we might need to buy some hardware first...
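For what it's worth, with a recent enough build the compile step should just be the standard invocation from the docs with the arch bumped to sm_75. A sketch, assuming a CUDA install under /usr/local/cuda (adjust paths for your system):

```shell
# Compile a CUDA source for an RTX 2080 (compute capability 7.5 = sm_75),
# following https://llvm.org/docs/CompileCudaWithLLVM.html.
# /usr/local/cuda is an assumed install location -- adjust as needed.
clang++ axpy.cu -o axpy \
  --cuda-gpu-arch=sm_75 \
  --cuda-path=/usr/local/cuda \
  -L/usr/local/cuda/lib64 \
  -lcudart_static -ldl -lrt -pthread
```

If that much works but the binary still hits cudaErrorIllegalInstruction at runtime, the generated SASS for sm_75 is the next thing to look at.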
On Fri, Dec 28, 2018 at 10:26 PM treinz via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> I'm trying to use clang 7.0.1 to compile a simple program as shown on
> https://llvm.org/docs/CompileCudaWithLLVM.html for an nvidia RTX 2080
> (compute capability 7.5). clang complains about not supporting sm_75. I
> then tried the LLVM repository head (7a5cc00 from the git mirror), and the
> simple program compiles and runs fine. But for more complicated
> programs, I often get this runtime error:
> cudaDeviceSynchronize() error( cudaErrorIllegalInstruction): an illegal
> instruction was encountered
> cuda-gdb traces this back to __cuda_sm70_warpsync().
> Can someone tell me whether the Turing GPU is currently supported?