<div dir="ltr"><div>Hi Gopal,<br><br></div>The reason is the absence or presence of an open-source IR->ISA translation component.<br><div><div><br>Regarding the last step of your NVIDIA flow: vectorAdd_kernel.ptx is translated to vectorAdd_kernel.cubin, which contains device-specific binary code. The translation is performed either by the NVIDIA CUDA runtime library (see e.g. cuModuleLoad), which is referred to as JIT, or by the ptxas command-line tool. In both cases the translation stage involves closed-source components of the NVIDIA CUDA toolkit, which are not part of LLVM. There are some alternatives, such as NVVM, asfermi, and PathScale.<br>
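For reference, the offline variant of that pipeline can be driven by hand roughly as follows. This is a sketch only: the file names and the sm_35 target are illustrative, clang's OpenCL flags differ between versions, and ptxas ships with the closed-source CUDA toolkit (the driver's cuModuleLoad JIT performs the same final step at run time).<br><br>

```shell
# OpenCL C -> LLVM IR with clang (exact flags vary by clang version)
clang -x cl -S -emit-llvm vectorAdd_kernel.cl -o vectorAdd_kernel.ll

# LLVM IR -> optimized LLVM IR with the LLVM optimizer
opt -O3 -S vectorAdd_kernel.ll -o vectorAdd_kernel.opt.ll

# Optimized LLVM IR -> PTX via the open-source NVPTX back-end
llc -march=nvptx64 vectorAdd_kernel.opt.ll -o vectorAdd_kernel.ptx

# PTX -> device binary with the closed-source ptxas
# (sm_35 is an example target; cuModuleLoad does this step in-driver)
ptxas -arch=sm_35 vectorAdd_kernel.ptx -o vectorAdd_kernel.cubin
```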
<br>AFAIK, the AMD pipeline, in contrast, offers two options: a closed-source driver (Catalyst) and an open-source one.<br><br></div><div>Best,<br></div><div>- D.<br><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
2013/10/9 Gopal Rastogi <span dir="ltr"><<a href="mailto:gopalrastogi.mmmec@gmail.com" target="_blank">gopalrastogi.mmmec@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div>Hi guys,<br><br>I am studying the OpenCL compilation flow on GPUs in order to develop an OpenCL runtime for new hardware.<br></div><br></div>I understand that the OpenCL compiler is part of a vendor's runtime library, which is the heart of OpenCL. Since an OpenCL kernel is compiled at run time, at a high level its compilation takes place in two steps: <br>
</div><div>i. Source code is first converted to intermediate code.<br></div><div>ii. The intermediate code is then translated to target binary code.<br></div><div><br></div><div>Let's say, for example, we have an OpenCL kernel source file vectorAdd_kernel.cl:<br>
</div>1. OpenCL compilation flow on Nvidia GPUs<br></div> a. vectorAdd_kernel.cl is first translated to LLVM IR using clang, and<br></div><div> b. LLVM IR is converted into optimized LLVM IR using the LLVM optimizer.<br>
</div>
<div> c. optimized LLVM IR is then translated to vectorAdd_kernel.ptx using the back-end.<br></div> d. vectorAdd_kernel.ptx is then translated to a vectorAdd_kernel.bin file using the JIT. Nvidia uses a JIT so that existing PTX can still be compiled when next-generation GPUs are encountered.<br>
<br></div>2. OpenCL compilation on AMD GPUs<br></div> a. vectorAdd_kernel.cl is first translated to LLVM IR using gcc/clang.<br></div> b. LLVM IR is then converted into optimized LLVM IR using the LLVM optimizer.<br></div> c. optimized LLVM IR is then converted into AMD IL.<br>
</div> d. AMD IL is then converted into AMD ISA using the shader compiler (GPU JIT).<br><br></div>I understand that AMD performs back-end compilation as part of its JIT, whereas Nvidia keeps the back-end separate from the JIT.<br><br>
</div>Is that correct? If so, what are the advantages of keeping the JIT separate from the back-end?<br><br>Thanks for your comments/opinions,<br></div>
-Gopal<br></div>
<br>_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:LLVMdev@cs.uiuc.edu">LLVMdev@cs.uiuc.edu</a> <a href="http://llvm.cs.uiuc.edu" target="_blank">http://llvm.cs.uiuc.edu</a><br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
<br></blockquote></div><br></div>