<div class="socmaildefaultfont" dir="ltr" style="font-family:"Helvetica Neue", Helvetica, Arial, sans-serif;font-size:10.5pt" ><div dir="ltr" >This should be fixed in r285263. </div>
<div dir="ltr" > </div>
<div dir="ltr" >Thanks!</div>
<div dir="ltr" >Samuel</div>
<div dir="ltr" > </div>
----- Original message -----
From: Justin Lebar <jlebar@google.com>
To: Gurunath Kadam <gurunath.kadam@gmail.com>, Samuel F Antao/Watson/IBM@IBMUS
Cc: Reid Kleckner <rnk@google.com>, cfe-dev <cfe-dev@lists.llvm.org>, Artem Belevich <tra@google.com>
Subject: Re: [llvm-dev] LLVM/CUDA generate LLVM IR
Date: Wed, Oct 26, 2016 10:05 PM

To close the loop, I found the change that introduced this crash and
pinged the author of the change. Hopefully we can get this fixed
soon.

https://reviews.llvm.org/D18172#580276

On Thu, Oct 13, 2016 at 2:21 PM, Justin Lebar <jlebar@google.com> wrote:
> Thank you very much for the testcases -- I'll look into fixing the
> assertion failure.
>
>> I think --cuda-gpu-arch=sm_35 and --cuda-path=/usr/local/cuda/ should be included, as the resulting code might be optimized for that architecture.
>
> You want --cuda-gpu-arch=sm_35, otherwise we'll default to sm_20.
> That doesn't make a huge difference beyond affecting which intrinsics
> are available to you, but still. You also want to pass sm_35 because
> it affects how we invoke ptxas -- passing sm_35 will cause us to
> use ptxas to generate GPU code specifically for sm_35. If you don't
> pass this but then run on an sm_35 GPU, the GPU driver will have to
> generate code at runtime, and this can be very slow.
>
> --cuda-path is optional: it is only required if clang can't find the CUDA
> installation, or if you want to specify a different one than what it
> finds by default. You can see which one it finds by invoking clang -v.
>
> On Thu, Oct 13, 2016 at 2:17 PM, Gurunath Kadam
> <gurunath.kadam@gmail.com> wrote:
>> Hi,
>>
>> Thank you, Justin, for your prompt reply. I was able to generate LLVM IR.
>>
>> For error-reproduction purposes, I have listed below all the commands
>> that worked and that did not work.
>>
>> Works (I have not yet checked whether the files generated by all of them
>> are the same):
>>
>> clang++ -O3 -emit-llvm -c axpy.cu -o axpy.bc --cuda-gpu-arch=sm_35
>> --cuda-path=/usr/local/cuda/ --cuda-device-only
>>
>> clang++ -O3 -emit-llvm -c axpy.cu -o axpy.bc --cuda-device-only
>>
>> Does not work:
>>
>> clang++ -O3 -emit-llvm -c axpy.cu --cuda-gpu-arch=sm_35 -o axpy.bc
>>
>> I think --cuda-gpu-arch=sm_35 and --cuda-path=/usr/local/cuda/ should be
>> included, as the resulting code might be optimized for that architecture.
>> I might be wrong, though.
>>
>> Thank you again.
>>
>> -Guru
>>
>> On Thu, Oct 13, 2016 at 4:38 PM, Justin Lebar <jlebar@google.com> wrote:
>>>
>>> If you add -### to your original command, you'll see that for CUDA
>>> compilations, we invoke clang -cc1 twice: once for the host, and once
>>> for the device. We can't emit LLVM IR or asm for both host and device
>>> at once, so you need to tell clang which one you want.
>>>
>>> The flag to do this is --cuda-device-only (or --cuda-host-only).
>>>
>>> Alternatively, you could compile with -save-temps to get everything.
>>>
>>> Feel free to send me a patch adding this information to
>>> http://llvm.org/docs/CompileCudaWithLLVM.html so that we can help
>>> others avoid this hiccup. The document lives in
>>> llvm/docs/CompileCudaWithLLVM.rst.
>>>
>>> > I tried adding -S -emit-llvm and changed the output file name, but I
>>> > keep getting the following error:
>>>
>>> That is a bug -- we should give you a meaningful error. It looks like
>>> this bug was probably introduced by the generic offloading driver
>>> changes.
>>>
>>> I am having difficulty reproducing the assertion failure, however.
>>> Can you please provide concrete steps to reproduce?
>>>
>>> Regards,
>>> -Justin
>>>
>>> On Thu, Oct 13, 2016 at 1:28 PM, Reid Kleckner <rnk@google.com> wrote:
>>> > Moving to cfe-dev
>>> >
>>> > +Art and Justin
>>> >
>>> > On Thu, Oct 13, 2016 at 1:13 PM, Gurunath Kadam via llvm-dev
>>> > <llvm-dev@lists.llvm.org> wrote:
>>> >>
>>> >> So for a C program we do:
>>> >>
>>> >> clang -O3 -emit-llvm hello.c -c -o hello.bc
>>> >>
>>> >> But how do we generate LLVM IR when working with CUDA?
>>> >>
>>> >> For normal compilation:
>>> >> clang++ axpy.cu -o axpy --cuda-gpu-arch=<GPU arch> -L<CUDA
>>> >> install path>/<lib64 or lib> -lcudart_static -ldl -lrt -pthread
>>> >>
>>> >> I tried adding -S -emit-llvm and changed the output file name, but I
>>> >> keep getting the following error:
>>> >>
>>> >> clang++:
>>> >> /stor/gakadam/llvm_projects/llvm/tools/clang/lib/Driver/Driver.cpp:1618:
>>> >> virtual
>>> >> {anonymous}::OffloadingActionBuilder::DeviceActionBuilder::ActionBuilderReturnCode
>>> >> {anonymous}::OffloadingActionBuilder::CudaActionBuilder::getDeviceDepences(clang::driver::OffloadAction::DeviceDependences&,
>>> >> clang::driver::phases::ID, clang::driver::phases::ID,
>>> >> {anonymous}::OffloadingActionBuilder::DeviceActionBuilder::PhasesTy&):
>>> >> Assertion `CurPhase < phases::Backend && "Generating single CUDA "
>>> >> "instructions should only occur " "before the backend phase!"' failed.
>>> >>
>>> >> I tried several combinations, but to no avail!
>>> >>
>>> >> Any suggestions?
>>> >>
>>> >> Thank you.
>>> >>
>>> >> Sincerely,
>>> >> Guru
>>> >>
>>> >> _______________________________________________
>>> >> LLVM Developers mailing list
>>> >> llvm-dev@lists.llvm.org
>>> >> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>> >>
>>> >
>>
>>
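
For readers coming to this thread from the archives, below is a minimal sketch of a test case consistent with the commands discussed above. The axpy kernel follows the example in llvm/docs/CompileCudaWithLLVM.rst; the host-side code, the sm_35 architecture, and the /usr/local/cuda path are illustrative assumptions rather than the original poster's exact setup.

// axpy.cu -- hypothetical test case along the lines of the one discussed above.
//
// Emit device-side LLVM bitcode only (the command that works in the thread):
//   clang++ -O3 -emit-llvm -c axpy.cu -o axpy.bc \
//       --cuda-gpu-arch=sm_35 --cuda-path=/usr/local/cuda --cuda-device-only
//
// Build a normal executable:
//   clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
//       -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread

#include <cstdio>

// y[i] = a * x[i]; one thread per element.
__global__ void axpy(float a, float *x, float *y) {
  y[threadIdx.x] = a * x[threadIdx.x];
}

int main() {
  const int kN = 4;
  float host_x[kN] = {1.0f, 2.0f, 3.0f, 4.0f};
  float host_y[kN];

  // Allocate device buffers and copy the input over.
  float *device_x, *device_y;
  cudaMalloc(&device_x, kN * sizeof(float));
  cudaMalloc(&device_y, kN * sizeof(float));
  cudaMemcpy(device_x, host_x, kN * sizeof(float), cudaMemcpyHostToDevice);

  // Launch one block of kN threads.
  axpy<<<1, kN>>>(2.0f, device_x, device_y);

  // Copy the result back (this synchronizes with the kernel) and print it.
  cudaMemcpy(host_y, device_y, kN * sizeof(float), cudaMemcpyDeviceToHost);
  for (int i = 0; i < kN; ++i)
    printf("%.1f\n", host_y[i]);

  cudaFree(device_x);
  cudaFree(device_y);
  return 0;
}

With --cuda-device-only, the driver runs only the device-side sub-compilation, so -emit-llvm has a single output to produce; the resulting axpy.bc can then be inspected with llvm-dis.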
<div dir="ltr" > </div></div><BR>