[LLVMdev] [llvm] r184698 - Add a flag to defer vectorization into a phase after the inliner and its

Hal Finkel hfinkel at anl.gov
Mon Jun 24 16:05:00 PDT 2013


----- Original Message -----
> On Jun 24, 2013, at 3:09 PM, Chandler Carruth <chandlerc at gmail.com>
> wrote:
> > The inliner, GVN, and the loop passes run together, *iteratively*.
> > They are neither before nor after one another. And this is
> > important as it allows iterative simplification in the inliner. It
> > is one of the most critical optimizations for C++ code that LLVM
> > does.
> > 
> > We can't sink all of the loop passes out of the iterative pass
> > model either, because deleting loops, simplifying them, etc. all
> > directly feed the iterative simplification needed by GVN and the
> > inliner.
> > 
> > We need a *second* loop pass that happens after the iterative CGSCC
> > walk which does the further optimizations such as (potentially
> > indvars, ) the vectorizers, LSR, lower-switch, CGP, CG. I think we
> > actually want most of the post CGSCC module passes to run after
> > the vectorizers and before LSR to fold away constants and globals
> > that look different after vectorization compared to before, but
> > aren't significantly shifted by LSR and CGP.
> 
> In terms of mental model, is it best to think of vectorization as
> being a loop pass or as a late lowering pass?
> 
> What about when we get more aggressive loop transformations like
> blocking, strip mining, fusion, etc?

I view vectorization as a lowering pass, but it changes the cost of the code (both in terms of size and runtime), and so the inliner should be aware of its effects. Of course, the inliner is already trying to guess properties of the code that will be generated for some given IR, and vectorization only makes this process potentially more difficult. Currently, we deal with this by running the vectorizer prior to inlining, but that is obviously not the only way.
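
To make that concrete, here is a small, made-up C++ example of the cost problem (the function and numbers are purely illustrative):

  // saxpy() looks like a tiny loop to the inline cost model, but after
  // vectorization (vector body plus prologue/epilogue and possible runtime
  // alias checks) its code size can grow several-fold, which changes
  // whether inlining it into every caller is still profitable.
  void saxpy(float *x, const float *y, float a, int n) {
    for (int i = 0; i < n; ++i)
      x[i] += a * y[i];
  }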

Fusion could be considered a canonicalization pass, but blocking, strip mining, etc. are also lowering operations (and, furthermore, lowering operations that need to be vectorization-aware).
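
As a source-level sketch of why that is (the width of 4 below is just an assumed vector factor, so the transform only makes sense with knowledge of the target):

  // Original loop: for (int i = 0; i < n; ++i) a[i] *= k;
  void scale(float *a, int n, float k) {
    int i = 0;
    for (; i + 4 <= n; i += 4)      // strip-mined chunk sized to the vector factor
      for (int j = 0; j < 4; ++j)   // inner loop is what the vectorizer targets
        a[i + j] *= k;
    for (; i < n; ++i)              // scalar remainder
      a[i] *= k;
  }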

In terms of categorization, there seem to be (at least) two relevant factors: canonicalization vs. target-awareness (lowering), and whether a pass creates, destroys, or preserves information. The inliner creates information (except, for example, when we lose noalias function-argument information, but that is an infrastructure limitation); the vectorizer destroys information (mainly an infrastructure limitation in SCEV, BasicAA, etc., but currently true regardless).
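
A trivial, made-up illustration of the inliner creating information: before inlining, the loop bound below is opaque to the caller's loop passes; once get_n() is inlined, SCEV sees a constant trip count of 4 and the loop can be fully unrolled or folded away.

  static int get_n() { return 4; }

  int sum(const int *a) {
    int s = 0;
    for (int i = 0; i < get_n(); ++i)  // bound is opaque until get_n() is inlined
      s += a[i];
    return s;
  }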

 -Hal

> > We need a *second* loop pass that happens after the iterative CGSCC
> > walk which does the further optimizations such as (potentially)

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
