[cfe-dev] Backend for C and OpenCL

Justin Holewinski justin.holewinski at gmail.com
Wed Oct 5 11:48:22 PDT 2011


On Wed, Oct 5, 2011 at 2:39 PM, Guoping Long <longguoping at gmail.com> wrote:

> Even if this works for Nvidia hardware, what about the ATI series? Saying
> this, I do not quite understand the underlying motivation for implementing
> another (workable, optimized) GPU backend. That requires non-trivial work,
> and there is already very good commercial support. In addition, since such
> backend optimizations inherently rely on specific hardware details, it is
> hard for me to be optimistic that an open-source version can perform
> comparably to the vendor-provided version.


The idea is to harness LLVM's optimization and analysis passes to generate
optimized GPU kernels.  On one hand, this has nothing to do with
OpenCL/CUDA: it gives front-ends a way to target NVidia GPU devices
directly, without having to first convert to OpenCL or CUDA.  Such a
conversion puts you at the mercy of the vendor front-ends.  On the other
hand, it provides a way to go beyond what nvcc can do in terms of
optimizations.  There is definitely good commercial support, but that
support comes in the form of black boxes that we ultimately have no control
over.


>
> I heartily agree that providing support for OpenCL and CUDA in Clang is
> important and useful. Working on source-level optimizations may be more
> interesting. I remember somebody proposed an interesting idea in this
> community: transform CUDA to OpenCL code and do the optimizations there. I
> would love to see such ideas pursued; is it infeasible, or too trivial to
> implement?
>

Converting CUDA to OpenCL would definitely be non-trivial, especially once
you start considering the CUDA C++ support.  It's probably feasible, though
not really within the scope of Clang.


>
> Please let me know if I miss something important. Thanks.
>
> ----
> Guoping
>
> 2011/10/5 Justin Holewinski <justin.holewinski at gmail.com>
>
>> On Wed, Oct 5, 2011 at 2:02 PM, Guoping Long <longguoping at gmail.com> wrote:
>>
>>> At least for me, this is a very interesting project. I may consider
>>> contributing later on, once it actually moves forward.
>>> Really optimizing OpenCL code in the backend is very challenging,
>>> because it relies heavily on hardware-specific information, which may
>>> not be disclosed.
>>> So I fully agree with transforming the code back to source once it is
>>> optimized. But why at the IR level, not the AST? Isn't transforming the
>>> AST much easier and more reasonable?
>>>
>>
>> If you're interested in NVidia hardware, the OpenCL->PTX workflow is
>> mostly implemented in LLVM ToT (will be part of LLVM 3.0).  The main missing
>> part at the moment is OpenCL work-item function calls, which have to be
>> implemented in terms of PTX intrinsics.  It's not difficult, but I haven't
>> written a library to do that yet.  Once you have PTX, you can use the CUDA
>> Driver API to load and execute the kernel.
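>>
>> For reference, the Driver API side looks roughly like the sketch below.
>> This is a hypothetical minimal example, not code from the toolchain: the
>> file name "kernel.ptx", the entry name "mykernel", and the launch
>> configuration are all placeholders, error handling is elided, and it
>> obviously only runs on a machine with an NVIDIA driver and the CUDA
>> toolkit (cuLaunchKernel requires CUDA 4.0 or later).

```c
#include <cuda.h>   /* CUDA Driver API; requires the CUDA toolkit */

/* Minimal sketch: load a PTX file produced by clang and launch one kernel.
   "kernel.ptx" and the entry name "mykernel" are placeholders.
   All error checking is omitted for brevity. */
int main(void) {
    CUdevice   dev;
    CUcontext  ctx;
    CUmodule   mod;
    CUfunction fn;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* Load the PTX module and look up the kernel entry point. */
    cuModuleLoad(&mod, "kernel.ptx");
    cuModuleGetFunction(&fn, mod, "mykernel");

    /* Launch a 1-D grid of 4 blocks x 256 threads, no kernel arguments. */
    cuLaunchKernel(fn, 4, 1, 1, 256, 1, 1, 0, NULL, NULL, NULL);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```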
>>
>> You can play around with it by using the following Clang options:
>>
>> $ clang -ccc-host-triple ptx32 -S <kernel>.cl
>>
>> You can define your work-item functions as something like the following:
>>
>> __attribute__((always_inline))
>> int get_group_id(int dim) {
>>   if (dim == 0)
>>     return __builtin_ptx_read_ctaid_x();
>>   else if (dim == 1)
>>     return __builtin_ptx_read_ctaid_y();
>>   else
>>     return __builtin_ptx_read_ctaid_z();
>> }
>>
>> __attribute__((always_inline))
>> int get_global_id(int dim) {
>>   if (dim == 0)
>>     return __builtin_ptx_read_ctaid_x() * __builtin_ptx_read_ntid_x()
>>            + __builtin_ptx_read_tid_x();
>>   else if (dim == 1)
>>     return __builtin_ptx_read_ctaid_y() * __builtin_ptx_read_ntid_y()
>>            + __builtin_ptx_read_tid_y();
>>   else
>>     return __builtin_ptx_read_ctaid_z() * __builtin_ptx_read_ntid_z()
>>            + __builtin_ptx_read_tid_z();
>> }
>>
>> __attribute__((always_inline))
>> int get_local_id(int dim) {
>>   if (dim == 0)
>>     return __builtin_ptx_read_tid_x();
>>   else if (dim == 1)
>>     return __builtin_ptx_read_tid_y();
>>   else
>>     return __builtin_ptx_read_tid_z();
>> }
>>
>> __attribute__((always_inline))
>> int get_global_size(int dim) {
>>   if (dim == 0)
>>     return __builtin_ptx_read_nctaid_x()*__builtin_ptx_read_ntid_x();
>>   else if (dim == 1)
>>     return __builtin_ptx_read_nctaid_y()*__builtin_ptx_read_ntid_y();
>>   else
>>     return __builtin_ptx_read_nctaid_z()*__builtin_ptx_read_ntid_z();
>> }
>>
>> #define barrier(kind) __builtin_ptx_bar_sync(kind)
>>
>> #define CLK_LOCAL_MEM_FENCE 0
>>
>>
>>>
>>> ------
>>> Guoping
>>>
>>> 2011/10/5 Alberto Magni <alberto.magni86 at gmail.com>
>>>
>>>>  Hi everybody,
>>>>
>>>> for a research project I would like to use LLVM to optimize
>>>> OpenCL programs for GPUs.
>>>>
>>>> Due to the lack of open-source back-ends and runtimes for
>>>> GPUs, my idea is the following:
>>>> 1) compile OpenCL C into LLVM IR (from what I read on the ML,
>>>> full support is close, or at least foreseeable),
>>>> 2) apply LLVM transformations to the bitcode,
>>>> 3) generate OpenCL C code from the optimized bitcode,
>>>> 4) use the official (Nvidia, AMD, Intel, ...) OpenCL compilers
>>>> and runtimes for the actual execution of the optimized code.
>>>>
>>>> I know that the C backend is buggy and no longer
>>>> supported, but it still works with simple C programs.
>>>> Remember that OpenCL programs are usually quite simple
>>>> (no function pointers, etc.).
>>>>
>>>> The main features to be added to the backend are:
>>>> 1) the "__kernel" keyword,
>>>> 2) the four address-space keywords,
>>>> 3) vector data types,
>>>> 4) the "half" keyword.
>>>>
>>>> My idea is to extensively verify the functionality of the C backend
>>>> on C programs (similar to OpenCL C ones) and possibly add the listed
>>>> features.
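>>>>
>>>> A tiny kernel touching all four of those features might look like the
>>>> following. This is a hypothetical example written for illustration
>>>> (the kernel name and buffers are made up), showing what the backend
>>>> would need to round-trip:

```
/* Hypothetical OpenCL C kernel exercising the four features listed above:
   __kernel, the address-space qualifiers, a vector type (float4), and half. */
__kernel void scale(__global   float4 *out,
                    __constant float4 *in,
                    __local    float4 *tmp,
                    __global   half   *hbuf)
{
    int i = get_global_id(0);          /* 'i' lives in __private space */
    tmp[get_local_id(0)] = in[i];
    barrier(CLK_LOCAL_MEM_FENCE);
    /* vload_half reads a half and returns a float, broadcast over float4. */
    out[i] = tmp[get_local_id(0)] * vload_half(i, hbuf);
}
```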
>>>>
>>>> What do you think of this? Is it feasible?
>>>>
>>>> Thank you,
>>>>
>>>> Alberto
>>>>
>>>> _______________________________________________
>>>> cfe-dev mailing list
>>>> cfe-dev at cs.uiuc.edu
>>>> http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> Thanks,
>>
>> Justin Holewinski
>>
>>
>


-- 

Thanks,

Justin Holewinski