[llvm-dev] (no subject)
Nadav Rotem via llvm-dev
llvm-dev at lists.llvm.org
Sat Jan 20 10:47:07 PST 2018
I have some concerns about adding Polly into LLVM proper. I think it's great that Polly is part of the LLVM umbrella of projects, like Clang and LLDB. However, I am not convinced that Polly belongs in the LLVM compiler library itself. LLVM is a major dependency for many external projects: Rust, Swift, GPU drivers by different vendors, and JIT compilers all rely on it. Projects that depend on LLVM will pay the cost of adding Polly into the LLVM library, and they operate under different constraints and optimize for different metrics.

One of my main concerns is binary size. The size of the LLVM compiler library matters, especially on mobile and for JIT compilers. Growing the LLVM binary increases app load time, because the shared object needs to be read from disk. Moreover, the current size of the LLVM library already prevents people from bundling a copy of LLVM with mobile apps, because app stores limit the size of apps. Yes, I know it's possible to disable Polly in production scenarios, but this looks like an unnecessary hurdle.
Would it be possible to use the LLVM plugin mechanism to load Polly dynamically into the clang pass manager instead of integrating it into the LLVM tree?
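For reference, Polly already supports this mode of operation when it is built as a shared object: it can be loaded into opt or clang at run time instead of being linked into the LLVM libraries. A minimal sketch of what that looks like today with the legacy pass manager (the path to LLVMPolly.so and the exact flag spellings are assumptions and may differ between releases):

```shell
# Load Polly into opt as a plugin and enable it for this run.
opt -load lib/LLVMPolly.so -polly -O3 input.ll -S -o output.ll

# Load Polly into clang via the cc1 plugin interface.
clang -Xclang -load -Xclang lib/LLVMPolly.so -O3 -mllvm -polly foo.c
```

The trade-off is that a dynamically loaded Polly keeps the core LLVM libraries small, but it makes the tighter integration discussed below (e.g., with the new pass manager) harder to achieve.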
On Jan 15, 2018, at 01:33 PM, Tobias Grosser via llvm-dev <llvm-dev at lists.llvm.org> wrote:
Dear LLVM community,
I hope all of you had a good start to 2018 and a quiet LLVM 6.0 branching.
With the latest LLVM release out of the way and a longer development phase ahead, we would like to restart the process of including Polly and isl in core LLVM, so that changes land early, well before the next LLVM release.
* Today Polly is already part of each LLVM release (and will ship with LLVM 6.0) for everybody to try, with conservative defaults.
* We proposed including Polly and isl in LLVM to bring modern high-level loop optimizations to LLVM.
* We suggested developing Polly and isl as part of core LLVM to make interaction with the core LLVM community easier and to allow us to better integrate Polly with the new pass manager.
Let me briefly summarize the current status:
* Michael sent out an official email to discuss how best to include isl in LLVM.
* We sent out the LLVM developers' meeting notes (http://lists.llvm.org/pipermail/llvm-dev/2018-January/120419.html).
* Philip Pfaffe prepared a preliminary patch set for integrating Polly more closely into LLVM (further cleanup needed).
* We are continuing to work with ARM (Florian Hahn and Francesco) to upstream the inliner changes needed for the end-to-end optimization of SPEC 2006 libquantum: https://reviews.llvm.org/D38585
* Oleksandr, Sven and Manasij Mukherjee started to look into spatial locality
* We worked on expanding the isl C++ bindings (http://repo.or.cz/isl.git/shortlog). While a first set of patches is already open, further patches will follow over the next couple of weeks.
Let me briefly summarize the LLVM developer meeting comments on our proposal (a subjective summary):
* Most people were interested in having polyhedral loop optimizations as part of LLVM.
* Ideas of uses of isl beyond polyhedral loop scheduling were raised (e.g., for polyhedral value analysis, dependence analysis, or broader assumption tracking). Others were interested in the use of polyhedral loop optimization with “learned” scheduling strategies.
* Specific concerns were raised that an integration of Polly into LLVM may be an implicit choice of LLVM's loop optimization future. This is not the case. While Polly is today LLVM's only end-to-end high-level loop optimizer, other approaches can and should be explored (e.g., can there be synergies with the region vectorizer?).
* How stable/fast/… is Polly today?
* We build all of AOSP with rather restrictive compile-time limits
* Clang bootstrap time regresses by at most 6%.
* Removal of scalar dependences is currently very generic and will need to be sped up.
* Polly still shows up at the top of middle-end compile-time profiles, but larger compile-time regressions are often due to increased code size (and therefore the LLVM backend).
* We see non-trivial speedups for hmmer, libquantum, and various linear-algebra kernels (we use gemm-specific optimizations). The first two require additional flags to be enabled.
The precise inclusion agenda has been presented here:
After the communities have merged, I suggest forming a loop optimization working group that jointly discusses how LLVM's loop optimizations should evolve.
I would like to invite comments regarding this proposal.
Are there any specific concerns we should address before requesting the initial svn move?