[llvm-dev] [GSoC] Project Proposal: Parallel extensions for llvm analysis and transform framework

Mehdi Amini via llvm-dev llvm-dev at lists.llvm.org
Thu Mar 16 11:31:54 PDT 2017


Hey,

I think it is a great idea!

On Mar 16, 2017, at 9:06 AM, Kareem Ergawy via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> Hello,
> 
> Below is a proposal for a GSoC project that I would like to work on this year. Your input and feedback is much appreciated.
> 
> Background:
> =========
> 
> My name is Kareem Ergawy and I currently work as part of the PIR project. PIR is an extension of LLVM IR to support fork-join parallelism and is currently under review [1, 2, 3, 4].
> 
> Goals:
> =====
> 
> As a GSoC project, here I propose an extension for llvm analyses and transformations to support parallel IR. Such an extension should serve as a foundation for an optimization framework for parallel regions. I would like to emphasize that, in this proposal, I am trying to propose ideas that satisfy the following conditions:
> 
> (1) Not specific to PIR but rather can be adapted to other explicit parallel representations in the IR. For example, see [5] for Hal Finkel's and Xinmin Tian's proposal for region annotations.

Having both Hal and Johannes as mentors would be very nice for this project indeed.
I’ll be happy to help as well if needed, but I feel Hal and Johannes would do a great job here if they’re willing to work on this.

— 
Mehdi





> 
> (2) Can be extended upon in the future. Hopefully, this will be the start of a parallel optimization pipeline that integrates nicely with llvm's sequential pipeline.
> 
> Proposed passes:
> =============
> 
> Here I list a few passes that I think will help kick off such a parallel-centric optimization pipeline.
> 
> 1 - Execution partial order analysis:
> ----------------------------------------------
> 
> Yuki et al. [6] characterize 2 parallel relations, namely "Happens Before" and "May Happen in Parallel". As the names imply, these 2 relations can be used to compute a partial ordering over statements. I believe such an analysis can serve as a building block for later analyses and optimizations. For example, it can relax the constraints induced by the sequential order of statements in the IR of a parallel program. Lifting such constraints can expose optimization opportunities even for already existing sequential passes. For an example of using such a pass, see point 3 below.
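> 
> Below is a minimal, self-contained C++ sketch of the kind of query interface such an analysis could expose. The class and method names are hypothetical and not part of any existing LLVM API; the relation is assumed to be transitively closed before it is queried:
> 
>   // Happens-Before (HB) and May-Happen-in-Parallel (MHP) queries over
>   // abstract statement IDs.
>   #include <set>
>   #include <utility>
> 
>   using StmtId = unsigned;
> 
>   class ExecutionPartialOrder {
>     // (A, B) in HB means A happens before B. HB is a strict partial
>     // order and is assumed to be transitively closed here.
>     std::set<std::pair<StmtId, StmtId>> HB;
> 
>   public:
>     // Record an ordering edge, e.g. induced by the sequential control
>     // flow inside one task or by a fork/join edge between tasks.
>     void addHappensBefore(StmtId A, StmtId B) { HB.emplace(A, B); }
> 
>     bool happensBefore(StmtId A, StmtId B) const {
>       return HB.count({A, B}) != 0;
>     }
> 
>     // Two statements may run in parallel iff neither is ordered before
>     // the other, i.e. MHP is the complement of HB and its inverse.
>     bool mayHappenInParallel(StmtId A, StmtId B) const {
>       return !happensBefore(A, B) && !happensBefore(B, A);
>     }
>   };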
> 
> 2 - Redundant barrier removal:
> ----------------------------------------
> 
> Satoh et al. [7] developed a memory synchronization analysis algorithm. At each synchronization point n, the algorithm calculates 2 sets:
> 
> (1) WSync(n): is the set of definitions/writes that may need to be synchronized at n.
> (2) RSync(n): is the set of uses/reads that may need to be synchronized at n.
> 
> If intersect(WSync(n), RSync(n)) = {}, then no inter-thread flow dependence needs to be enforced at n, and the barrier is redundant. Removing such barriers mitigates the associated runtime overhead.
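> 
> As a rough C++ sketch of the removal criterion only (memory locations are abstracted as plain integer IDs here; in a real pass they would be Value/MemoryLocation-style handles, and computing the two sets is the hard part that the analysis in [7] provides):
> 
>   #include <algorithm>
>   #include <iterator>
>   #include <set>
> 
>   using LocSet = std::set<int>; // abstracted memory locations
> 
>   // A barrier n is redundant when no write that must be made visible at
>   // n (WSync) is also read after n by another thread (RSync).
>   bool isBarrierRedundant(const LocSet &WSync, const LocSet &RSync) {
>     LocSet Common;
>     std::set_intersection(WSync.begin(), WSync.end(),
>                           RSync.begin(), RSync.end(),
>                           std::inserter(Common, Common.begin()));
>     return Common.empty();
>   }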
> 
> 3 - Adapt polyhedral transformations for parallel contexts:
> --------------------------------------------------------------------------
> 
> Chatarasi et al. [6] build on the "Happens Before" relation introduced in the first point above to enable a larger set of polyhedral transformations. In particular, the complement of the "Happens Before" relation is used to prune overly conservative dependences. The reduced set of dependences can then be passed to a polyhedral transformation tool. I believe such an approach could be a good addition to Polly, since it enables faster and more meaningful transformations of parallel programs.
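> 
> Reusing the hypothetical ExecutionPartialOrder and StmtId types sketched under point 1, the pruning step could look roughly like this (the data-race-freedom assumption is the usual setting for such analyses and is stated here as my own simplification, not as a claim about [6]):
> 
>   #include <utility>
>   #include <vector>
> 
>   using Dep = std::pair<StmtId, StmtId>; // (source stmt, sink stmt)
> 
>   // Filter conservatively computed dependences: statements that may
>   // happen in parallel (i.e. are unordered by HB) carry no flow
>   // dependence under the data-race-freedom assumption, so only ordered
>   // pairs are kept and handed to the polyhedral scheduler.
>   std::vector<Dep> pruneDependences(const std::vector<Dep> &Conservative,
>                                     const ExecutionPartialOrder &PO) {
>     std::vector<Dep> Pruned;
>     for (const Dep &D : Conservative)
>       if (PO.happensBefore(D.first, D.second))
>         Pruned.push_back(D);
>     return Pruned;
>   }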
> 
> Other passes:
> -------------------
> 
> The 3 suggestions above give a taste of what an analysis framework for parallel programs in the IR could look like. There are other examples in the literature that can be added as well, and new ones can be designed. I will try to implement as much as I can within the GSoC time period. The good thing about this project is its incremental nature: it consists of a set of self-contained, measurable steps.
> 
> Implementation:
> ============
> 
> I propose to implement these passes on top of PIR. Since PIR is currently under active development, I believe it is a good vehicle for experimenting with the idea of a parallel-centric optimization pipeline.
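> 
> For illustration only, one possible way to expose the partial-order analysis through LLVM's new pass manager (nothing below exists in tree; it assumes the ExecutionPartialOrder sketch from point 1 is visible and that PIR's fork/join structure can be recovered from a Function):
> 
>   #include "llvm/IR/Function.h"
>   #include "llvm/IR/PassManager.h"
> 
>   namespace llvm {
> 
>   class ExecutionPartialOrderAnalysis
>       : public AnalysisInfoMixin<ExecutionPartialOrderAnalysis> {
>     friend AnalysisInfoMixin<ExecutionPartialOrderAnalysis>;
>     static AnalysisKey Key;
> 
>   public:
>     // The result type would be the ExecutionPartialOrder structure
>     // sketched under point 1.
>     using Result = ExecutionPartialOrder;
> 
>     // The body (not shown) would walk the fork/join structure of the
>     // PIR regions in F and build the Happens-Before relation.
>     Result run(Function &F, FunctionAnalysisManager &AM);
>   };
> 
>   AnalysisKey ExecutionPartialOrderAnalysis::Key;
> 
>   } // namespace llvm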
> 
> More about my background and commitments this Summer:
> ===========================================
> 
> Currently I am a CS master's student at Saarland University, where I work on the practical aspects of the PIR project. Also, over the past year I have been working on the design and implementation of a parallel primitives library called libpp (not public yet) as part of the AnyDSL compiler framework project [8]. The library is implemented in the research language Impala.
> 
> During the coming summer, I shouldn't have any commitments other than PIR and libpp, which will only take a few hours every week. This proposal integrates well with PIR, and hopefully you will find it of general interest for the LLVM project. I would like to emphasize that with this proposal I am aiming at a parallel-centric optimization pipeline, and PIR is a nice vehicle to experiment with that.
> 
> Final notes:
> =========
> 
> Johannes Doerfert suggested the general idea of this project to me and also provided very useful guidance on writing the proposal.
> 
> Thank you for reading this proposal and hopefully you will find it useful.
> 
> [1] "[llvm-dev] [RFC][PIR] Parallel LLVM IR -- Stage 0 -- IR extension", http://lists.llvm.org/pipermail/llvm-dev/2017-January/109615.html
> [2] https://reviews.llvm.org/D29250
> [3] https://reviews.llvm.org/D30353
> [4] https://reviews.llvm.org/D30354
> [5] "[llvm-dev] [RFC] IR-level Region Annotations", http://lists.llvm.org/pipermail/llvm-dev/2017-January/108906.html
> [6] Chatarasi, Prasanth, Jun Shirako, and Vivek Sarkar. "Polyhedral transformations of explicitly parallel programs." 5th International Workshop on Polyhedral Compilation Techniques (IMPACT), Amsterdam, Netherlands, 2015.
> [7] Satoh, Shigehisa, Kazuhiro Kusano, and Mitsuhisa Sato. "Compiler optimization techniques for OpenMP programs." Scientific Programming 9.2-3 (2001): 131-142.
> [8] https://anydsl.github.io/
> 
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
