[PATCH] D28975: LV: Introducing VPlan to model the vectorized code and drive its transformation

Ayal Zaks via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jan 20 17:17:01 PST 2017


Ayal created this revision.
Herald added subscribers: mgorny, sanjoy.

This patch follows our RFC [1] and presentation at the Dev Meeting [2]. Namely, it starts to address the proposal stated there:

> Proposal: introduce the Vectorization Plan as an explicit model of a vectorization candidate and update the overall flow

according to the first step expressed:

> The first patches we're working on are designed to have the innermost Loop Vectorizer explicitly model the control flow of its vectorized loop.

This implementation is designed to show key aspects of the VPlan model, demonstrating how it can capture precisely *all* vectorization decisions taken inside a to-be-vectorized loop by the current Loop Vectorizer, and carry them out. It is therefore practically an NFC patch, with the slight deviations listed below. The VPlan model implemented strives to be compact, addressing compile-time concerns. More technical details are documented in the accompanying VectorizationPlan.rst file. The patch can be broken down into several hunks for incremental landing; a tentative break-down list is provided below.
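To make the idea concrete, here is a minimal, self-contained C++ sketch. The names below are illustrative only and are not the classes this patch introduces (see VPlan.h for those); the point is simply that decisions are recorded up front as recipes grouped into plan blocks, and executed later to drive code generation:

#include <iostream>
#include <string>
#include <vector>

// State threaded through execution, standing in for what ILV keeps
// (IRBuilder insertion point, per-part/per-lane value maps, etc.).
struct PlanExecutionState {
  unsigned VF; // chosen vectorization factor
  unsigned UF; // chosen unroll factor
};

// A single modeled decision, e.g. "widen this instruction" or
// "scalarize and predicate that one".
struct Recipe {
  std::string Desc;
  void execute(const PlanExecutionState &State) const {
    // A real recipe would emit IR here; the sketch only reports the decision.
    std::cout << "  executing: " << Desc << " (VF=" << State.VF << ")\n";
  }
};

// A block of recipes mirroring one basic block of the planned vector loop.
struct PlanBlock {
  std::string Name;
  std::vector<Recipe> Recipes;
};

// The plan: an explicit, printable model of one vectorization candidate,
// built and inspected before any vector IR is emitted.
struct Plan {
  std::vector<PlanBlock> Blocks;
  void execute(const PlanExecutionState &State) const {
    for (const PlanBlock &B : Blocks) {
      std::cout << B.Name << ":\n";
      for (const Recipe &R : B.Recipes)
        R.execute(State);
    }
  }
};

int main() {
  Plan P;
  P.Blocks.push_back(
      {"vector.body", {{"widen add"}, {"scalarize + predicate store"}}});
  P.execute({/*VF=*/4, /*UF=*/1});
}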

Thanks to the Intel vectorization team for this joint effort,
Gil and Ayal.

Deviations from current functionality:

- Debug printouts of “LV: Scalarizing [and predicating]: <inst>”: VPlan carries out these decisions before the Cost-Model’s printouts, unlike current behavior.
- Placement of extracts is moved to the basic block of their users: at the start of that basic block rather than immediately before the first user; the difference in order should be insignificant, subject to scheduling.
- Redundant basic blocks and phi’s are introduced; these are insignificant and subject to subsequent clean-up.

Tentative break-down; some tasks refactor or fix the current LV, and some introduce parts of VPlan:

- refactor Cost-Model to provide MaxVF and early-exit methods.
- refactor ILV to provide vectorizeInstruction, getScalarValue, getVectorValue, widenIntInduction, buildScalarSteps, PHIsToFix/fixCrossIterationPHIs, and possibly additional methods.
- fix Unroller’s getScalarValue() to reuse ILV’s refactored getScalarValue(Part, Lane) which also sets metadata. Will simplify this patch.
- unify the GEP reuse behavior between a vectorized wide load/store, and the wide load/store of an interleave group. Will simplify this patch.
- have LV avoid creating redundant basic-blocks. Will help this patch be fully NFC.
- have LV cache basic-block masks and reuse them. Will help this patch be fully NFC.
- build initial VPlans and print them for debugging.
- convert ILV.vectorize to use LVP.executeBestPlan, keeping sinkScalarOperands() as a non-VPlan post-processing method (see the sketch after this list).
- optimize VPlans by introducing sinkScalarOperands() and print them for debugging.
- use VPlan’s sinkScalarOperands() instead of the non-VPlan version.
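
As a rough, standalone illustration of the planner-driven flow the last items describe (hypothetical names; not the actual LV/ILV interfaces): build one candidate plan per VF up to MaxVF, pick the cheapest, and execute only that one.

#include <iostream>
#include <limits>
#include <map>

// One candidate plan, tagged by its vectorization factor. In this sketch the
// "plan" is just a tag plus a mock cost; the real thing models the CFG of the
// vector loop.
struct CandidatePlan {
  unsigned VF = 1;
  unsigned cost() const { return 100 / VF + 3 * VF; } // stand-in cost model
  void execute() const {
    std::cout << "emitting vector loop for VF=" << VF << "\n";
  }
};

// Hypothetical planner: owns the candidate plans, picks one, executes it.
struct Planner {
  std::map<unsigned, CandidatePlan> Plans;

  // Build an initial plan for every candidate VF up to MaxVF
  // (mirrors "build initial VPlans and print them for debugging").
  void buildPlans(unsigned MaxVF) {
    for (unsigned VF = 2; VF <= MaxVF; VF *= 2)
      Plans[VF] = CandidatePlan{VF};
  }

  // Select the cheapest candidate and carry out only its decisions
  // (mirrors converting ILV.vectorize to a planner-driven executeBestPlan).
  void executeBestPlan() const {
    const CandidatePlan *Best = nullptr;
    unsigned BestCost = std::numeric_limits<unsigned>::max();
    for (const auto &Entry : Plans)
      if (Entry.second.cost() < BestCost) {
        BestCost = Entry.second.cost();
        Best = &Entry.second;
      }
    if (Best)
      Best->execute();
  }
};

int main() {
  Planner LVP;
  LVP.buildPlans(/*MaxVF=*/8);
  LVP.executeBestPlan();
}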

[1] RFC <http://lists.llvm.org/pipermail/llvm-dev/2016-September/105057.html> 
[2] Extending LoopVectorizer towards supporting OpenMP4.5 SIMD and outer loop auto-vectorization, 2016 LLVM Developers' Meeting <https://www.youtube.com/watch?v=XXAvdUwO7kQ>


https://reviews.llvm.org/D28975

Files:
  docs/VPlanPrinter.png
  docs/VPlanRecipesUML.png
  docs/VPlanUML.png
  docs/VectorizationPlan.rst
  docs/Vectorizers.rst
  lib/Transforms/Vectorize/CMakeLists.txt
  lib/Transforms/Vectorize/LoopVectorize.cpp
  lib/Transforms/Vectorize/VPlan.cpp
  lib/Transforms/Vectorize/VPlan.h
  test/Transforms/LoopVectorize/AArch64/aarch64-predication.ll
  test/Transforms/LoopVectorize/AArch64/predication_costs.ll
  test/Transforms/LoopVectorize/if-pred-non-void.ll
  test/Transforms/LoopVectorize/induction.ll

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D28975.85215.patch
Type: text/x-patch
Size: 261752 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20170121/78af2cef/attachment-0001.bin>

