[LLVMdev] GSoC 2012 Proposal: Automatic GPGPU code generation for llvm

Tobias Grosser tobias at grosser.es
Tue Apr 3 04:49:29 PDT 2012


On 04/02/2012 04:16 PM, Yabin Hu wrote:
> Hi all,
>
> I am a phd student from Huazhong University of Sci&Tech, China. The
> following is my GSoC 2012 proposal.

Hi Yabin,

> Comments are welcome!
>
> *Title: Automatic GPGPU Code Generation for LLVM*
>
> *Abstract*
> Very often, manually developing an GPGPU application is a
                        developing a GPGPU

> time-consuming, complex, error-prone and iterativeprocess. In this
                                            iterative process.

> project, I propose to build an automatic GPGPU code generation framework
> for LLVM, based on two successful LLVM (sub-)projects - Polly and PTX
> backend. This can be very useful to ease the burden of the long learning
> curve of various GPU programming model.
                                    models.
I like the idea ;-)

Please submit a first version of this proposal to the Google SoC web 
application. You can refine it later, but it is important that it is 
officially registered. That way you are on the safe side in case 
something unexpected happens in the final days.

> *Motivation*
> With the broad proliferation of GPU computing, it is very important to
> provide an easy and automatic tool for normal developers to develop or
> port applications to the GPU, especially for those domain experts who
> want to harness the huge computing power of GPUs.
> Polly has implemented
> many transformations, such as tiling, auto-vectorization and OpenMP code
> generation. With the help of LLVM's PTX backend, I plan to extend Polly
> with the feature of GPGPU code generation.

> *Project Detail*
> In this project, we target various parallel loops which can be described
> by Polly's polyhedral model. We first translate the selected SCoPs
> (Static Control Parts) into loop nests of depth 4 with Polly's schedule
> optimization. Then we extract the loop body (or inner non-parallel
> loops) into an LLVM sub-function, tagged with the PTX_Kernel or
> PTX_Device calling convention. After that, we use the PTX backend to
> translate the sub-functions into a string of the corresponding PTX
> code. Finally, we provide a runtime library to generate the executable
> program.

I would distinguish here between the infrastructure features that you 
add to Polly and the actual code generation/scheduling strategy you will 
follow. It should become clear that the infrastructure changes are 
independent of the actual code generation strategy you use.
This is especially important as automatic GPGPU code generation is a 
complex problem. I doubt it will be possible to implement a perfect 
solution within three months. Hence, I would target a (very) simple code
generation strategy that brings all the necessary infrastructure into 
Polly. When the infrastructure is ready and proven to work, you can start
to implement (and evaluate) more complex code generation strategies.

> There are three key challenges in this project.
> 1. How to get the optimal execution configuration for the GPU code.
> The execution configuration is essential to the performance of the GPU
> code. It is limited by many factors, including hardware, source code,
> register usage, local store (device) usage, original memory access
> patterns and so on. We must take all of these into consideration.

Yes and no. Don't try to solve everything within 3 months. Rather try 
to limit yourself to some very simple but certainly achievable goals.
I would probably go with a very simple heuristic first.

> 2. How to automatically insert the synchronization code.
> This is very important to preserve the original semantics. We must
> correctly detect where we need to insert it.

Again, distinguish here between the infrastructure of adding 
synchronizations and the algorithm to derive optimal synchronizations.

> 3. How to automatically generate the memory copy operations between host
> and device.
> We must transfer the input data to the GPU and copy the
> results back. Fortunately, Polly has implemented a very expressive way
> to describe memory accesses.

In general, I think it may be helpful to have some examples that show 
what you want to do.

> *Timeline*
> May 21 ~ June 3 preliminary code generation for 1-d and 2-d parallel loops.
> June 4 ~ June 11 code generation for parallel loops with non-parallel
> inner loops.
> June 11 ~ June 24 automatic memory copy insertions.
> June 25 ~ July 8 auto-tuning for GPU execution configuration.
What do you mean by auto-tuning? What do you want to tune?

For me it does not seem to be essential.

Due to the short time of a GSoC, I would suggest just requiring the user 
to define such values and giving a little more time to the other 
features. You can put it on a nice-to-have list of ideas that can be 
implemented after the success criteria have been fulfilled.

> July 9 ~ July 15 Midterm evaluation and writing documents.
> July 16 ~ July 22 automatic synchronization insertion.
> July 23 ~ August 3 test on polybench benchmarks.
> August 4 ~ August 12 summarize and complete the final documents.

An additional list with details for the individual steps would be good: 
when are you planning to add which infrastructure? You may also add 
example code.

> *Project experience*
> I participated in several projects related to binary translation
> (optimization) and run-time system. And I implemented a frontend for
> numerical computing languages like octave/matlab, following the style of
> clang. Recently, I have been working very closely with the Polly team,
> contributing patches and investigating many details of polyhedral
> transformation.

You may add links to the corresponding commit messages.

> *References*
> 1. Tobias Grosser, Ragesh A. /Polly - First Successful Optimizations -
> How to proceed?/ LLVM Developer Meeting 2011.
> 2. Muthu Manikandan Baskaran, J. Ramanujam and P. Sadayappan.
> /Automatic C-to-CUDA Code Generation for Affine Programs/. CC 2010.
> 3. Soufiane Baghdadi, Armin Größlinger, and Albert Cohen. /Putting
> Automatic Polyhedral Compilation for GPGPU to Work/. In Proc. of
> Compilers for Parallel Computers (CPC), 2010.

You are adding references, but don't reference them in your text. Is 
this intentional?

Overall, this looks interesting. Looking forward to your final submission.

Tobi

P.S. Feel free to post again to get further comments.
