[LLVMdev] OpenMP support for LLVM

Hal Finkel hfinkel at anl.gov
Mon Jan 16 08:47:08 PST 2012


On Mon, 2012-01-16 at 17:14 +0100, Tobias Grosser wrote:
> On 01/16/2012 03:04 AM, Vlad Krylov wrote:
> > I am interested. I would be grateful for your hints.
> 
> Great. ;-)
> 
> > So OpenMP has various constructs such as parallel, barrier, single,
> > for, etc. And there are at least two libraries that the generated
> > code could target: libgomp and mpc. We want to be independent of any
> > specific library.
> 
> True.
> 
> > We should create an interface whose methods correspond to manually
> > inserting OpenMP pragmas into C or Fortran code. These sound like
> > "this code block should be run with #pragma omp parallel" or "this
> > loop should be run with #pragma omp parallel for". Constructs like
> > barrier and atomic can simply be represented by 'barrier' and
> > 'atomic' methods, but the parallel and for constructs allow many
> > modifiers (schedule, private, shared, etc.), and I cannot yet devise
> > a good solution for these cases.
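To make the clause problem concrete, here is a small C fragment showing the
kind of modifier combinations such a builder interface would have to express
(the particular clause choices below are only illustrative):

  void scale(int n, double *a, const double *b)
  {
      double sum = 0.0;

      /* parallel region with explicit data-sharing clauses */
      #pragma omp parallel shared(a, b, sum) firstprivate(n)
      {
          /* worksharing loop with a schedule modifier and a reduction */
          #pragma omp for schedule(dynamic, 16) reduction(+:sum)
          for (int i = 0; i < n; ++i) {
              a[i] = 2.0 * b[i];
              sum += a[i];
          }

          /* constructs like single map naturally to one builder method */
          #pragma omp single
          a[0] += sum;
      }
  }
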
> 
> I think we should distinguish here how to represent the OpenMP 
> constructs in LLVM-IR and how to create such LLVM-IR.
> 
> > I guess this interface is close to IRBuilder; you suggested it could
> > be an OpenMPBuilder. That means the IR would be created from scratch.
> > The other way is to provide methods that modify existing IR, as in
> > the user saying "I want this loop to be parallel, do it for me". What
> > do you think about this?
> 
> I think both are valid methods to create LLVM-IR that uses OpenMP 
> extensions.
> 
> Alexandra and other people interested in automatic parallelization are 
> probably more interested in transforming a plain LLVM-IR loop into 
> LLVM-IR with OpenMP extensions. To add OpenMP annotation support to 
> clang, on the other hand, clang's code generation would probably be 
> better off directly emitting LLVM-IR with OpenMP constructs.
> 
> > The implementation consists of calls to OpenMP library functions, in
> > the form prescribed by the library's ABI. For example, in mpc:
> >
> >     #pragma omp for schedule(runtime)
> >     for ( i=lb ; i COND b ; i+=incr ) { A ; }
> >     ->
> >     if ( __mpcomp_runtime_loop_begin( ... ) ) {
> >       do {
> >         for ( ... ) {
> >           A ;
> >         }
> >       } while ( __mpcomp_runtime_loop_next( ... ) ) ;
> >     }
> >     __mpcomp_runtime_loop_end() ;
> >
> >
> > I think this work will not be simple for me, so any comments are welcome.
> 
> I have some ideas, but as this is a larger project, we should probably 
> get a wider audience to review such a proposal. (You may also want to 
> copy the mpc authors, who have some experience adding OpenMP support 
> to a compiler IR.)

FWIW, there is a BSD-licensed OpenMP 3 source-to-source translator
available as part of the ROSE project: http://www.rosecompiler.org/
It might be worth looking at.

> 
> Some ideas from my side:
> 
> How to represent OpenMP in LLVM?
> 
> I do not believe OpenMP needs direct LLVM-IR extensions; it should 
> rather be represented through a set of intrinsics. The intrinsics may 
> resemble the function calls generated for OpenMP schedule(runtime) 
> loops, but they should be runtime-library independent. They should 
> also carry enough information to be optimized within LLVM whenever the 
> necessary information is already available at compile time.
> 
> How to translate LLVM OpenMP intrinsics to actual code?
> 
> Library-specific passes can be used to lower the intrinsics to the 
> corresponding libgomp or mpc library calls. Where this is already 
> possible at compile time, fast LLVM-IR sequences can be inlined 
> instead of calling the actual library functions.

I agree; the ability to have target-specific lowering of the
parallelization constructs will be very important for some platforms. We
may need some new attributes in the IR so that we can still safely do
things like LICM.
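
As a rough illustration of the LICM concern (the outlining scheme and names
here are hypothetical, following the representation sketch below): once the
loop body is outlined behind a parallelization intrinsic, a value that is
invariant across all iterations ends up being recomputed inside the outlined
function, and the optimizer can only hoist it back out if the intrinsic
carries attributes describing how the body is invoked:

  /* Hypothetical outlined form of a parallel loop body (names are
   * illustrative).  c->x * c->y is invariant across every iteration,
   * but after outlining it is recomputed on each call to body();
   * hoisting it requires the optimizer to reason across the
   * parallelization intrinsic, hence the need for attributes. */
  struct ctx { int x, y; int *out; };

  static void body(int i, struct ctx *c)
  {
      c->out[i] = c->x * c->y + i;
  }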

 -Hal

> 
>   #pragma omp for schedule(runtime)
>   for ( i=lb ; i COND b ; i+=incr ) { A ; }
> 
> -> Represented as LLVM-IR
> 
>   def original_function() {
>     llvm.openmp.loop(lower_bound, upper_bound, body, context, schedule)
>   }
> 
>   def body(i, context) {
>     A(i);
>   }
> 
> ->  Lowering to MPC: (for schedule = runtime)
> 
>   if ( __mpcomp_runtime_loop_begin( ... ) ) {
>     do {
>       for ( ... ) {
>         A ;
>       }
>     } while ( __mpcomp_runtime_loop_next( ... ) ) ;
>   }
>   __mpcomp_runtime_loop_end() ;
> 
> ->  Lowering to MPC: (for schedule = static)
> 
> Some fast LLVM-IR implementations of __mpc_static_loop_begin() may be 
> called and inlined.
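
As a sketch of what such an inlinable static-schedule helper might look like
(this is not the actual mpc ABI, just an illustrative chunk computation built
on the standard omp_get_thread_num()/omp_get_num_threads() calls):

  #include <omp.h>

  /* Illustrative static-schedule bound computation: each thread derives
   * its own contiguous chunk of [lb, ub) from its thread id, with no
   * call into the runtime's scheduling loop, so the whole sequence can
   * be inlined at the call site. */
  static void static_chunk(long lb, long ub, long *my_lb, long *my_ub)
  {
      long n     = ub - lb;
      long nthr  = omp_get_num_threads();
      long tid   = omp_get_thread_num();
      long chunk = (n + nthr - 1) / nthr;           /* ceil(n / nthr) */
      long lo    = lb + tid * chunk;
      long hi    = lo + chunk < ub ? lo + chunk : ub;

      *my_lb = lo;              /* thread executes i in [*my_lb, *my_ub) */
      *my_ub = hi;
  }

A lowering pass could emit this kind of sequence directly as LLVM-IR and let
the usual optimizations fold it further when the bounds are known.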
> 
> How to generate the OpenMP intrinsics?
> 
> There are two ways:
> 
> * People implement the OpenMP frontend stuff in clang and clang can 
> directly emit the relevant intrinsics when generating LLVM-IR.
> 
> * An automatic parallelizer generates code by transforming existing 
> LLVM-IR. Here projects like Polly or Alexandra's tools come in.
> 
> Vlad:
> 
> If you are interested in working on such a thing, I propose you start 
> by writing up a small proposal. This may contain the general 
> architecture of all this, a proposal for how to represent OpenMP at 
> the LLVM-IR level, and everything else you think is important. You can 
> then put this proposal up for discussion on the LLVM mailing list. I, 
> and hopefully several other people, will be glad to give our opinions.
> 
> Cheers
> Tobi
> 
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

-- 
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory



