[Openmp-dev] [llvm-dev] [cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries

Mehdi Amini via Openmp-dev openmp-dev at lists.llvm.org
Fri Apr 22 16:08:54 PDT 2016


> On Apr 22, 2016, at 3:24 PM, Chandler Carruth <chandlerc at gmail.com> wrote:
> 
> On Fri, Apr 22, 2016 at 3:05 PM Mehdi Amini <mehdi.amini at apple.com <mailto:mehdi.amini at apple.com>> wrote:
> 
> > On Apr 22, 2016, at 3:01 PM, Chandler Carruth <chandlerc at gmail.com <mailto:chandlerc at gmail.com>> wrote:
> >
> > I feel like this thread got a bit stalled. I'd like to pick it up and try to suggest a path forward.
> >
> > I don't hear any real objections to the overall idea of having an LLVM subproject for parallelism runtimes and support libraries. I think we should get that created.
> 
> I think it should be clarified whether "parallelism runtimes and support libraries" are intended to expose user-level APIs or APIs for compiler-generated code (this may be part of your point about "writing up its charter, scope", but I also think it shouldn't be underestimated as a task, so I called it out).
> 
> Absolutely. I think that needs to be clearly spelled out.
> 
> Personally, I'd like to see the subproject open to *both*. Here are some libraries I would love to see (but don't necessarily have concrete plans around):
> - A nice vectorized math library
> - Linear algebra libraries like BLAS implementations or such
> - Highly tuned FFT or other domain-specific libraries for GPUs. Essentially the same as the vectorized math libraries, but for GPUs and slightly higher level.
> - StreamExecutor
> - Any generic components of the OpenMP libraries.
> 
> Clearly each of these would need to be discussed on a case-by-case basis, but there seems to be a healthy mixture of both user-level APIs and compiler-level APIs. I would suggest criteria for inclusion along the lines of:
> 
> - Includes compiler-targeted APIs (maybe in addition to user-level APIs, maybe even with overlap), or
> - Leverages compiler details for its implementation (for example, using vector extensions we know LLVM supports), or
> - Wants to use compiler-specific packaging techniques or other integration techniques (for example shipping as bitcode), or
> - Helps support compiler or programming language functionality
> 
> The first three here seem clear-cut to me. If any part of the library is intended to be callable by the compiler, it's a good fit. SE has such interfaces. Vectorized math libraries do too, etc. If the implementation of the library really wants to use compiler internals like our vector math extensions, again, I think it makes sense to keep it reasonably co-located with the compiler.
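> For instance (purely an illustrative sketch, not an existing interface: the entry-point name, the width, and the scalar fallback below are hypothetical), a vectorized math routine built on Clang's vector extensions that the vectorizer could be taught to call might look like:
> 
>   // Hypothetical compiler-targeted entry point for an 8-wide sinf,
>   // written with Clang's ext_vector_type extension. The scalar loop is
>   // a placeholder; a real implementation would use tuned SIMD code.
>   typedef float float8 __attribute__((ext_vector_type(8)));
> 
>   extern "C" float8 __llvm_vsinf8(float8 x) {
>     float8 r;
>     for (int i = 0; i < 8; ++i)
>       r[i] = __builtin_sinf(x[i]);
>     return r;
>   }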
> 
> The last seems a bit tricky, but I think it's really important. Currently, CUDA provides a pretty big programming surface, and having, for example, a well-tuned BLAS or FFT implementation that integrates with CUDA is pretty important. Similarly, in the future we expect C++ to get lots of parallel standard library interfaces, potentially even BLAS-looking ones, and we might want a good parallel BLAS implementation or other very fundamental parallel library implementation to use when implementing them.
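> To make that last point concrete (a minimal sketch only, using the execution-policy form of the parallel algorithms adopted for C++17; a library hosted here would just provide the parallel implementation underneath calls like this):
> 
>   #include <algorithm>
>   #include <execution>
>   #include <vector>
> 
>   // Scale a vector in parallel through an execution policy; a parallel
>   // runtime or BLAS-like library in this subproject could sit beneath
>   // this kind of standard interface.
>   void scale(std::vector<float> &v, float a) {
>     std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
>                    [a](float x) { return a * x; });
>   }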
> 
> But at the same time, I think it's really important to have a clear place where any library here ties back into the compiler and/or programming language ecosystems that are the core of LLVM.
> 
> Does this seem like it's going in the right direction?

Yes.
I just think we need to be careful to have clear layering/decoupling between the various pieces of the libraries (I'm not sure low-level/high-level is the right distinction, for instance; it would require some thought), but the LLVM community is usually pretty good at this (even if the recent "discussions" around lld indicated it is not always a given).

-- 
Mehdi



> (Jason can probably take on the non-trivial task of writing this up more formally and make sure it is clearly documented.)
> 
> 
> Otherwise your plan sounds good to me.
> 
> --
> Mehdi
> 
> 
> 
> >
> > I don't actually see any real objections to StreamExecutor being one of the runtimes. There are some interesting questions, however:
> > - Is there common code in the OpenMP runtime that could be unified with this?
> > - Could OpenMP end up using SE or some common shared library between them as a basis for offloading?
> > - Would it instead make more sense to have the OpenMP offload library be a plugin for StreamExecutor?
> >
> > I don't know the answer to any of these really, but I also don't think that they should prevent us from making progress here. And I think if anything, they'll become easier to answer if we do.
> >
> > So my suggestion would be:
> > 1) Create the broader scoped LLVM subproject, including writing up its charter, scope, plans, etc.
> >
> > 2) Add stream executor to it
> >
> > 3) Initially, leave the OpenMP offloading stuff targeted at OpenMP. Then, as it evolves, consider moving it to be another runtime in the broad project if and when it makes sense.
> >
> > 4) As both OpenMP and SE evolve and are used some in the project, evaluate whether there is a common core that makes sense to extract. If so, do it and rebase them appropriately.
> >
> >
> > Does this make sense? Are there objections to moving forward here?
> 
