[llvm-dev] [Proposal][RFC] Epilog loop vectorization

Zaks, Ayal via llvm-dev llvm-dev at lists.llvm.org
Sun Feb 26 09:23:14 PST 2017

+1  for “just rerun the vectorizer” on the scalar remainder loop, as the proposed decision process is broken down into “first determine best VF for main loop, then determine best next EpilogVF for remainder loop” anyhow:

   const LoopVectorizationCostModel::VectorizationFactor EpilogVF =

Raising some aspects:

o The unroll factor may also affect the best EpilogVF. For instance, if UF=1 then EpilogVF < VF, as the patch currently enforces. But if UF is larger the next profitable EpilogVF could be equal to VF.

o The scalar loop serves two purposes, as determined by its two pre-headers: either as a default in case runtime dependence checks fail, or as a remainder loop, in which case it is known to be vectorizable with trip count less than VF*UF (or equal to it*). Would be good to keep this in mind when rerunning.

(*) Note that if original loop requiresScalarEpilogue(), the trip count of the remainder loop may be equal to VF*UF, and/or the vectorized remainder loop may too require a scalar epilogue.

o It would be interesting to see how a single masked iteration that uses the original VF, say for UF=1, works. LV should hopefully already support most of what’s needed.

o The original Trip Count clearly affects the profitability of vectorizing the remainder loop. Would be good to leverage any information that can be derived about TC either statically or based on profiling, when determining EpilogVF. Potential speedups and overheads/slowdowns could possibly be demonstrated using extreme cases; what would the best and worst cases be? Perhaps TinyTripCountVectorThreshold is also affected?

o Finally, VPlan currently models how to vectorize the loop body according to the potentially profitable VFs. Its modeling could be used to generate vector code for both body and remainder, and to consider their combined, overall cost.


From: llvm-dev [mailto:llvm-dev-bounces at lists.llvm.org] On Behalf Of Hal Finkel via llvm-dev
Sent: Friday, February 24, 2017 00:14
To: Adam Nemet <anemet at apple.com>; Nema, Ashutosh <Ashutosh.Nema at amd.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>
Subject: Re: [llvm-dev] [Proposal][RFC] Epilog loop vectorization

On 02/22/2017 11:52 AM, Adam Nemet via llvm-dev wrote:
Hi Ashutosh,

On Feb 22, 2017, at 1:57 AM, Nema, Ashutosh via llvm-dev <llvm-dev at lists.llvm.org> wrote:


This is a proposal about epilog loop vectorization.

Currently, the Loop Vectorizer inserts an epilogue loop for handling loops that don’t have known iteration counts.

The Loop Vectorizer supports loops with an unknown trip count. Since the trip count may not be a multiple of the vector width, the vectorizer has to execute the last few iterations as scalar code, so it keeps a scalar copy of the loop for the remaining iterations.

A loop vectorized with a large vector width has a high chance of executing many scalar iterations.
For example, an i8 data type on a target with 256-bit registers can be vectorized with vector width 32; the maximum possible trip count of the scalar (epilog) loop is then 31, which is significant and worth vectorizing.

A large vectorization factor poses the following challenges:
1)    The number of remainder iterations can be substantial.
2)    The actual trip count at runtime may be substantial, yet still below the minimum trip count required to execute the vector loop.

These challenges can be addressed by mask instructions, but such instructions may not be available on all targets.

With epilog vectorization, our aim is to vectorize the epilog loop when the original loop is vectorized with a large vector factor and has a high possibility of executing scalar iterations.

This requires the following changes:
1)    Costing: preserve all profitable vector factors.
2)    Transform: create an additional vector loop with the next profitable vector factor.

Is this something that you propose to be on by default for wide VPU architectures without masking support? I.e., how widely is this applicable? If not, then perhaps a better strategy would be to just annotate the remainder loop with some metadata to limit the vectorization factor and just rerun the vectorizer.

Why would this solution (annotating the remainder loop to limit vectorization and rerunning the vectorization process) not be preferred regardless?

One issue that might be relevant here is the runtime aliasing checks, which are probably going to be redundant, or partially redundant, between the different vectorized versions. Will we be able to do any necessary cleanup after the fact? Moreover, we often don't hoist (unswitch) these checks out of inner loops (perhaps because they're inside the trip-count checks?); I wonder if the proposed block structure will make this situation better or worse (or have no overall effect).

Thanks again,


Please refer attached file (BlockLayout.png) for the details about transformed block layout.

Patch is available at: https://reviews.llvm.org/D30247


LLVM Developers mailing list
llvm-dev at lists.llvm.org





Hal Finkel

Lead, Compiler Technology and Programming Languages

Leadership Computing Facility

Argonne National Laboratory
Intel Israel (74) Limited

