[llvm-dev] [RFC] Polly Status and Integration

Hal Finkel via llvm-dev llvm-dev at lists.llvm.org
Fri Sep 1 11:47:57 PDT 2017


Hi everyone,

As you may know, stock LLVM does not provide the kind of 
advanced loop transformations necessary to provide good performance on 
many applications. LLVM's Polly project provides many of the required 
capabilities, including loop transformations such as fission, fusion, 
skewing, blocking/tiling, and interchange, all powered by 
state-of-the-art dependence analysis. Polly also provides automated 
parallelization and targeting of GPUs and other accelerators.
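To make one of these transformations concrete, here is a hand-written C++ 
sketch of blocking/tiling applied to a matrix transpose. This is 
illustrative only, not Polly's output; Polly performs the equivalent 
rewrite automatically on LLVM IR:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Naive transpose of an n x n row-major matrix: the write to b is
// strided by n, so large matrices thrash the cache.
std::vector<int> transpose_naive(const std::vector<int>& a, std::size_t n) {
    std::vector<int> b(n * n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            b[j * n + i] = a[i * n + j];
    return b;
}

// Tiled transpose: process B x B blocks so the working set of both
// arrays fits in cache; the result is identical to the naive version.
std::vector<int> transpose_tiled(const std::vector<int>& a, std::size_t n,
                                 std::size_t B) {
    std::vector<int> b(n * n);
    for (std::size_t ii = 0; ii < n; ii += B)
        for (std::size_t jj = 0; jj < n; jj += B)
            for (std::size_t i = ii; i < std::min(ii + B, n); ++i)
                for (std::size_t j = jj; j < std::min(jj + B, n); ++j)
                    b[j * n + i] = a[i * n + j];
    return b;
}
```

The tile size B would, in Polly, come from the target's cache-modeling 
parameters rather than being chosen by hand.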


Over the past year, Polly’s development has focused on robustness, 
correctness, and closer integration with LLVM. To highlight a few 
accomplishments:


  * Polly now runs, by default, in the conceptually-proper place in
    LLVM’s pass pipeline (just before the loop vectorizer). Importantly,
    this means that its loop transformations are performed after
    inlining and other canonicalization, greatly increasing its
    robustness, and enabling its use on C++ code (where [] is often a
    function call before inlining).

  * Polly’s cost-modeling parameters, such as those describing the
    target’s memory hierarchy, are being integrated with
    TargetTransformInfo. This allows targets to properly override the
    modeling parameters and allows reuse of these parameters by other
    clients.

  * Polly’s method of handling signed division/remainder operations,
    which worked around lack of support in ScalarEvolution, is being
    replaced thanks to improvements being contributed to ScalarEvolution
    itself (see D34598). Polly’s core delinearization routines have long
    been a part of LLVM itself.

  * PolyhedralInfo, which exposes a subset of Polly’s loop analysis for
    use by other clients, is now available.

  * Polly is now part of the LLVM release process and is being included
    with LLVM by various packagers (e.g., Debian).
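As a small illustration of the point about C++ subscripts above: in the 
loop below, each `x[i]` is a call to `std::vector<double>::operator[]` 
until the inliner runs; only afterwards does it become a plain load that 
loop analyses can model as an affine access. (A hand-written example, not 
code from Polly itself.)

```cpp
#include <cstddef>
#include <vector>

// Before inlining, the subscripts here are opaque member-function calls;
// after inlining they are direct loads from the vectors' data pointers,
// which polyhedral analysis can reason about.
double dot(const std::vector<double>& x, const std::vector<double>& y) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        s += x[i] * y[i];
    return s;
}
```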


I believe that the LLVM community would benefit from beginning the 
process of integrating Polly with LLVM itself and continuing its 
development as part of our main code base. This will:

  * Allow for wider adoption of LLVM within communities relying on
    advanced loop transformations.

  * Provide for better community feedback on, and testing of, the code
    developed (although the story in this regard is already fairly solid).

  * Better motivate targets to provide accurate, comprehensive modeling
    parameters for use by advanced loop transformations.

  * Perhaps most importantly, allow us to develop and tune the rest of
    the optimizer assuming that Polly’s capabilities are present (the
    underlying analysis and, eventually, the transformations themselves).


The largest issue on which community consensus is required, in order to 
move forward at all, is what to do with isl. isl, the Integer Set 
Library, provides core functionality on which Polly depends. It is a C 
library, and while some Polly/LLVM developers are also isl developers, 
it has a large user community outside of LLVM/Polly. A C++ interface was 
recently added, and Polly is transitioning to use the C++ interface. 
Nevertheless, options here include rewriting the needed functionality, 
forking isl and transitioning our fork toward LLVM coding conventions 
(and data structures) over time, and incorporating isl more-or-less 
as-is to avoid partitioning its development.


That having been said, isl is internally modular, and regardless of the 
overall integration strategy, the Polly developers anticipate 
specializing, or even replacing, some of these components with 
LLVM-specific solutions. This is especially true for anything that 
touches performance-related heuristics and modeling. LLVM-specific, or 
even target-specific, loop schedulers may be developed as well.


Even though some developers in the LLVM community already have a 
background in polyhedral-modeling techniques, the Polly developers have 
developed, and are still developing, extensive tutorials on this topic 
http://pollylabs.org/education.html and especially 
http://playground.pollylabs.org.


Finally, let me highlight a few ongoing development efforts in Polly 
that are potentially relevant to this discussion. Polly’s loop analysis 
is sound and technically superior to what’s in LLVM currently (i.e. in 
LoopAccessAnalysis and DependenceAnalysis). There are, however, two 
known reasons why Polly’s transformations could not yet be enabled by 
default:

  * A correctness issue: Currently, Polly assumes that 64 bits is large
    enough for all new loop-induction variables and index expressions.
    In rare cases, transformations could be performed where more bits
    are required. Preconditions need to be generated to prevent this
    (e.g., D35471).

  * A performance issue: Polly currently models temporal locality (i.e.,
    it tries to get better reuse in time), but does not model spatial
    locality (i.e., it does not model cache-line reuse). As a result, it
    can sometimes introduce performance regressions. Polly Labs is
    currently working on integrating spatial-locality modeling into the
    loop optimization model.
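A hand-written C++ sketch of the spatial-locality issue (illustrative 
only, not Polly code): both functions below compute the same sum, but 
the interchanged version walks the row-major array along cache lines, 
which is the kind of distinction a cost model without spatial-locality 
modeling cannot see:

```cpp
#include <cstddef>
#include <vector>

// Column-first traversal of a row-major matrix: each access is strided
// by n, so nearly every load touches a new cache line.
long sum_columns_first(const std::vector<long>& a, std::size_t n) {
    long s = 0;
    for (std::size_t j = 0; j < n; ++j)
        for (std::size_t i = 0; i < n; ++i)
            s += a[i * n + j];
    return s;
}

// Interchanged loops: consecutive, stride-1 accesses reuse each cache
// line fully; the result is identical.
long sum_rows_first(const std::vector<long>& a, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            s += a[i * n + j];
    return s;
}
```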

Polly can already split apart basic blocks in order to implement loop 
fusion. Heuristics to choose the granularity at which to fuse are still 
being implemented (e.g., PR12402).
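At the source level, loop fusion amounts to the rewrite sketched below 
(a hand-written illustration; Polly performs the equivalent on LLVM IR):

```cpp
#include <cstddef>
#include <vector>

// Two adjacent loops, each traversing a[] once.
void scale_then_shift(std::vector<double>& a, double m, double c) {
    for (std::size_t i = 0; i < a.size(); ++i) a[i] *= m;
    for (std::size_t i = 0; i < a.size(); ++i) a[i] += c;
}

// The fused form traverses a[] once, reusing each element while it is
// still in a register or cache; the results are identical.
void scale_shift_fused(std::vector<double>& a, double m, double c) {
    for (std::size_t i = 0; i < a.size(); ++i) a[i] = a[i] * m + c;
}
```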

I believe that we can now develop a concrete plan for moving 
state-of-the-art loop optimizations, based on the technology in the 
Polly project, into LLVM. Doing so will enable LLVM to be competitive 
with proprietary compilers in high-performance computing, machine 
learning, and other important application domains. I'd like community 
feedback on what should be part of that plan.


Sincerely,

Hal (on behalf of myself, Tobias Grosser, and Michael Kruse, with 
feedback from several other active Polly developers)


We thank the numerous people who have contributed to the Polly 
infrastructure: Alexandre Isoard, Andreas Simbuerger, Andy Gibbs, Annanay 
Agarwal, Armin Groesslinger, Ajith Pandel, Baranidharan Mohan, Benjamin 
Kramer, Bill Wendling, Chandler Carruth, Craig Topper, Chris Jenneisch, 
Christian Bielert, Daniel Dunbar, Daniel Jasper, David Blaikie, David 
Peixotto, Dmitry N. Mikushin, Duncan P. N. Exon Smith, Eli Friedman, 
Eugene Zelenko, George Burgess IV, Hans Wennborg, Hongbin Zheng, Huihui 
Zhang, Jakub Kuderski, Johannes Doerfert, Justin Bogner, Karthik Senthil, 
Logan Chien, Lawrence Hu, Mandeep Singh Grang, Matt Arsenault, Matthew 
Simpson, Mehdi Amini, Micah Villmow, Michael Kruse, Matthias Reisinger, 
Maximilian Falkenstein, Nakamura Takumi, Nandini Singhal, Nicolas 
Bonfante, Patrik Hägglund, Paul Robinson, Philip Pfaffe, Philipp Schaad, 
Peter Conn, Pratik Bhatu, Rafael Espindola, Raghesh Aloor, Reid 
Kleckner, Roal Jordans, Richard Membarth, Roman Gareev, Saleem 
Abdulrasool, Sameer Sahasrabuddhe, Sanjoy Das, Sameer AbuAsal, Sam 
Novak, Sebastian Pop, Siddharth Bhat, Singapuram Sanjay Srivallabh, 
Sumanth Gundapaneni, Sunil Srivastava, Sylvestre Ledru, Star Tan, Tanya 
Lattner, Tim Shen, Tarun Ranjendran, Theodoros Theodoridis, Utpal Bora, 
Wei Mi, Weiming Zhao, and Yabin Hu.

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
