[LLVMdev] [RFC] Parallelization metadata and intrinsics in LLVM

Hal Finkel hfinkel at anl.gov
Wed Aug 15 09:36:31 PDT 2012


On Wed, 15 Aug 2012 11:56:48 +0100
Renato Golin <rengolin at systemcall.org> wrote:

> On 15 August 2012 11:04, Raghavendra, Prakash
> <Prakash.Raghavendra at amd.com> wrote:
> > My idea is to capture parallelism the way you have said, using
> > ‘metadata’. I agree to record the parallel regions in the metadata
> > (as given by the user). However, we could also provide placeholders
> > to record any additional information that the compiler writer needs,
> > like the number of threads, scheduling parameters, chunk size, etc.,
> > which are perhaps specific to OpenMP.
> 
> Hi Prakash,
> 
> I can't see the silver bullet you do. Different types of parallelism
> (thread/process/network/heterogeneous) have completely different
> assumptions, and the same keyword can mean different things, depending
> on the context. If you try to create a magic metadata that will cover
> from OpenCL to OpenMP to MPI, you'll end up having to have namespaces
> in metadata, which is the same as having N different types of
> metadata.
> 
> If there were a language that could encompass the pure meaning of
> parallelism (Oracle just failed at building one), one could assume
> many things for each paradigm (OpenCL, OpenMP, etc.) far more easily
> than by trying to fit the already complex rules of C/C++ into the even
> more complex rules of target/vendor-dependent behaviour. OpenCL is
> supposed to be less of a problem there, but the target is so different
> that I wouldn't try to merge OpenCL keywords with OpenMP ones.
> 
> True, you can do that with a handful of basic concepts, but the more
> obscure ones will break your leg. And you *will* have to implement
> them. Those of us unlucky enough to have had to implement bitfields,
> anonymous unions, volatile, and C++ class layout know what I mean by
> that.
> 
> True, we're talking about the language-agnostic LLVM IR, but you have
> to remember that LLVM IR is built from real-world languages and thus
> full of front-end hacks and fiddles to tell the back end about the ABI
> decisions in a generic way.
> 
> I'm still not convinced there will be a lot of shared keywords between
> all parallel paradigms, i.e. that you can take the same IR and compile
> to OpenCL, or OpenMP, or MPI, etc and it'll just work (and optimise).
> 
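[A concrete sketch may make the tension clearer. None of the metadata names below exist in LLVM; they are invented purely for illustration of the "namespaced metadata" idea under discussion. A generic parallel-loop marker whose extra operands live in an OpenMP-specific namespace might look like:]

```llvm
; Hypothetical sketch only: !parallel.loop, "openmp", "num_threads",
; and "schedule" are invented names, not existing LLVM metadata.
define void @zero(float* noalias %a, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %p = getelementptr float* %a, i64 %i
  store float 0.0, float* %p
  %i.next = add i64 %i, 1
  %cmp = icmp ult i64 %i.next, %n
  ; The generic marker is attached to the loop's backedge branch.
  br i1 %cmp, label %loop, label %exit, !parallel.loop !0

exit:
  ret void
}

; A generic marker whose additional operands are tagged with a
; standard-specific namespace string:
!0 = metadata !{metadata !"openmp", metadata !"num_threads", i32 8,
                metadata !"schedule", metadata !"static", i32 64}
```

[A pass that does not understand the "openmp" namespace could still honor the generic marker, which is exactly the point of contention: either every consumer learns N namespaces, or the namespaces are effectively N different kinds of metadata.]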

Renato,

To some extent, I'm not sure that the keywords are the largest problem;
rather, it is the runtime libraries. OpenMP, OpenACC, Cilk++, etc. all
have runtime libraries that provide functions that interact with the
respective syntax extensions. Allowing for that within a generic
framework might be difficult.
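[As a sketch of that interaction: the function @region_body below is hypothetical, but omp_get_thread_num is a real OpenMP runtime entry point. Even if a parallel region itself were described by generic metadata, its outlined body would still call the standard's runtime directly:]

```llvm
; The OpenMP runtime function the region body depends on.
declare i32 @omp_get_thread_num()

; Hypothetical outlined body of a parallel region: its semantics are
; tied to the OpenMP runtime, not to any generic region description.
define void @region_body(i32* %results) {
entry:
  %tid = call i32 @omp_get_thread_num()
  %slot = getelementptr i32* %results, i32 %tid
  store i32 %tid, i32* %slot
  ret void
}
```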

That having been said, from the implementation side, there are
certainly commonalities that we should exploit. Basic changes to
optimization passes (loop iteration-space transformations, LICM, etc.)
and to alias analysis will be necessary to support many kinds of
parallelism, and I think having generic support for that in LLVM is
highly preferable to specialized support for many different standards.
I think that there will be standard-specific semantics that will need
specific modeling, but we should share when possible (and I think that
a lot of the basic infrastructure can be shared).
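[As one concrete illustration of why a pass like LICM must become parallelism-aware (the IR below is a hypothetical sketch; the hazard, not the syntax, is the point):]

```llvm
@flag = global i32 0

define void @worker(i32* noalias %out, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  ; In a serial loop, this load is invariant and LICM may hoist it.
  ; If the iterations instead run concurrently with another thread
  ; that legitimately updates @flag, hoisting changes which value
  ; each iteration observes, so LICM would need to see (and respect)
  ; whatever marks this loop as parallel.
  %f = load i32* @flag
  %p = getelementptr i32* %out, i64 %i
  store i32 %f, i32* %p
  %i.next = add i64 %i, 1
  %cmp = icmp ult i64 %i.next, %n
  br i1 %cmp, label %loop, label %exit

exit:
  ret void
}
```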

 -Hal

-- 
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory
