[LLVMdev] [RFC] OpenMP Representation in LLVM IR
hfinkel at anl.gov
Mon Oct 1 21:03:29 PDT 2012
On Mon, 01 Oct 2012 20:06:20 -0500
greened at obbligato.org wrote:
> Andrey Bokhanko <andreybokhanko at gmail.com> writes:
> > Hi All,
> > We'd like to make a proposal for OpenMP representation in LLVM IR.
> I'm providing some brief comments after a skim of this..
> > Our goal is to reach an agreement in the community on a simple,
> > complete and extensible representation of OpenMP language constructs
> > in LLVM IR.
> I think this is a bad idea. OpenMP is not Low Level and I can't think
> of a good reason to start putting OpenMP support in the IR. Cray has
> a complete, functioning OpenMP stack that performs very well without
> any LLVM IR support at all.
OpenMP provides a mechanism to express parallel semantics, and the
best way to implement those semantics is highly target-dependent. On
some targets early lowering into a runtime library will perform well,
and optimization opportunities lost by doing so will prove fairly
insignificant in many cases. I can believe that this is true on those
systems that Cray targets. However, that will not be true everywhere.
> > As can be seen in the following sections, the IR extension we
> > propose doesn’t involve explicit procedurization. Thus, we assume
> > that function outlining should happen somewhere in the LLVM
> > back-end, and usually this should be aligned with how chosen OpenMP
> > runtime library works and what it expects. This is a deliberate
> > decision on our part. We believe it provides the following benefits
> > (when compared with designs involving procedurization done in a
> > front-end):
> This is a very high-level transformation. I don't think it belongs
> in a low-level backend.
> > 1) Function outlining doesn’t depend on source language; thus, it
> > can be implemented once and used with any front-ends.
> A higher-level IR would be more appropriate for this, either something
> provided by Clang or another frontend, or some other mid-level IR.
For some things, yes, but at the moment we don't have anything else
besides the LLVM IR. The LLVM IR is currently where vectorization,
loop restructuring, alias analysis, etc. are done, and so it is where
parallelization should be done as well. For other things, like
atomics, lowering later may be better.
> > 2) Optimizations are usually restricted by a single function
> > boundary. If procedurization is done in a front-end, this
> > effectively kills any optimizations – as simple ones as loop
> > invariant code motion. Refer to [Tian2005] for more information on
> > why this is important for efficient optimization of OpenMP programs.
> You're assuming all optimization is done by LLVM. That's not true in
Even if LLVM has support for parallelization, no customer is required
to use it. If you'd like to lower parallelization semantics into
runtime calls before lowering in LLVM, you're free to do that.
Nevertheless, LLVM is where, for example, loop-invariant code motion is
performed. We don't want to procedurize parallel loops before that
happens.
> > It should be stressed, though, that in order to preserve correct
> > semantics of a user program, optimizations should be made
> > thread-aware (which, to the best of our knowledge, is not the case
> > with LLVM optimizations).
> Another reason not to do this in LLVM.
> > We also included a set of requirements for front-ends and back-ends,
> > which establish mutual expectations and are an important addition
> > to the design.
> This will increase coupling between the "front ends" and LLVM. That
> would be very unfortunate. One of LLVM's great strengths is its
Most users do not use every feature of a programming language, and LLVM
is no different.
> I didn't look at the details of how you map OMP directives to LLVM IR.
> I think this is really the wrong way to go.
Respectfully, I disagree.
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
Leadership Computing Facility
Argonne National Laboratory