[LLVMdev] [RFC] OpenMP Representation in LLVM IR

dag at cray.com
Tue Oct 2 12:25:35 PDT 2012

Hal Finkel <hfinkel at anl.gov> writes:

> OpenMP provides a mechanism to express parallel semantics, and the
> best way to implement those semantics is highly target-dependent. On
> some targets early lowering into a runtime library will perform well,
> and optimization opportunities lost by doing so will prove fairly
> insignificant in many cases. I can believe that this is true on those
> systems that Cray targets. However, that will not be true everywhere.

Granted.  However, it's still not clear to me that LLVM IR is the right
level for this.  I'm not going to oppose the idea or anything; I'm just
expressing thoughts.

>> A higher-level IR would be more appropriate for this, either something
>> provided by Clang or another frontend, or some other mid-level IR.
> For some things, yes, but at the moment we don't have anything else
> besides the LLVM IR. 

That's simply because no one has addressed the issue.  Obviously, it
would be a lot of work to develop a new IR but the long-term benefits
may be worth it.

> The LLVM IR is currently where vectorization is done,
> loop restructuring is done, alias analysis is performed, etc., and
> so is where parallelization should be done as well.

I made a presentation at the llvmdev conference some years ago in which
I (briefly) argued that LLVM IR is not the best fit for this stuff.  I
still believe that.  Obviously it _can_ be done because LLVM IR is
Turing-complete.  But would it be easier/more effective to do it with a
different IR representation?  I believe so.  'Course I don't expect any
of you to just take my word for it.  :)

> Even if LLVM has support for parallelization, no customer is required
> to use it. If you'd like to lower parallelization semantics into
> runtime calls before lowering in LLVM, you're free to do that.

Yep, and we'll continue doing that.  I'm not objecting because the
proposal will hurt us in some way.  I'm not even objecting, really, just
pointing out alternatives for thought.

> Nevertheless, LLVM is where, for example, loop-invariant code motion is
> performed. We don't want to procedurize parallel loops before that
> happens.

Yes, there are benefits to delaying outlining to allow other passes to
run.  I do understand the whys of the various proposals.
