[llvm-dev] Inclusion of Polly and isl into core LLVM

Tobias Grosser via llvm-dev llvm-dev at lists.llvm.org
Mon Jan 15 13:44:45 PST 2018


Dear LLVM community,

I hope all of you had a good start into 2018 and a quiet branching of LLVM 6.0.

With the latest LLVM release out of the way and a longer development phase starting, we would like to restart the process of including Polly and isl into core LLVM, so that changes land early before the next LLVM release.

Short summary:

 * Today Polly is already part of each LLVM release (and will ship with LLVM 6.0) for everybody to try, with conservative defaults.
 * We proposed to include Polly and isl into LLVM to bring modern high-level loop optimizations to LLVM.
 * We suggested developing Polly and isl as part of core LLVM to make interaction with the core LLVM community easier and to allow us to better integrate Polly with the new pass manager.
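As a reminder for those who would like to try the release builds: Polly can be enabled on top of a normal optimization pipeline via clang's `-mllvm` flags, as documented on polly.llvm.org. A minimal invocation (exact defaults and profitability heuristics may vary by release) looks like:

```shell
# Enable Polly's polyhedral optimizations on top of -O3.
clang -O3 -mllvm -polly file.c -o file

# Optionally force Polly to also transform loops it considers
# unprofitable, to see its effect on small examples.
clang -O3 -mllvm -polly -mllvm -polly-process-unprofitable file.c -o file
```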

Let me briefly summarize the current status:

 * Michael sent out an official email to discuss how to best include isl into LLVM.
 * We sent out the LLVM developers' meeting notes (_http://lists.llvm.org/pipermail/llvm-dev/2018-January/120419.html_).
 * Philip Pfaffe prepared a preliminary patch set for integrating Polly more closely into LLVM (further cleanup needed).
 * We are working further with ARM (Florian Hahn and Francesco) to upstream the inliner changes needed for the end-to-end optimization of SPEC 2006 libquantum: _https://reviews.llvm.org/D38585_
 * Oleksandr, Sven and Manasij Mukherjee started to look into spatial locality
 * We worked on expanding the isl C++ bindings (_http://repo.or.cz/isl.git/shortlog_). While a first set of patches is already open, further patches will follow over the next couple of weeks.

Let me briefly summarize the LLVM developers' meeting comments on our proposal (subjective summary):

 * Most people were interested in polyhedral loop optimizations becoming part of LLVM.
 * Ideas of uses of isl beyond polyhedral loop scheduling were raised (e.g., for polyhedral value analysis, dependence analysis, or broader assumption tracking). Others were interested in the use of polyhedral loop optimization with “learned” scheduling strategies.
 * Specific concerns were raised that integrating Polly into LLVM may be an implicit choice of LLVM's loop-optimization future. This is not the case. While Polly is today the only end-to-end high-level loop optimizer, other approaches can and should be explored (e.g., can there be synergies with the region vectorizer?).
 * How stable/fast/… is Polly today?
   * We build all of AOSP with rather restrictive compile-time limits.
   * The bootstrap time of clang regresses by at most 6%.
   * The removal of scalar dependences is today very generic and must be sped up in the future.
   * Polly still shows up at the top of middle-end compile-time profiles, but larger compile-time regressions are often due to increased code size (and the LLVM backend).
   * We see non-trivial speedups for hmmer, libquantum, and various linear-algebra kernels (using gemm-specific optimizations). The first two require additional flags to be enabled.

The precise inclusion agenda has been presented here:


Once the communities have merged, I suggest forming a loop optimization working group to jointly discuss how LLVM's loop optimizations should evolve.

I would like to invite comments regarding this proposal.
Are there any specific concerns we should address before requesting the initial svn move?

