[LLVMdev] [BBVectorizer] Obvious vectorization benefit, but req-chain is too short

Pekka Jääskeläinen pekka.jaaskelainen at tut.fi
Sat Feb 4 06:32:27 PST 2012


Thanks for your work on the bb-vectorizer. It looks like a
promising pass to be used for multi-work-item vectorization in
On 02/04/2012 06:21 AM, Hal Finkel wrote:
> Try it now (after r149761). If this "solution" causes other problems,
> then we may need to think of something more sophisticated.

I wonder if the case where a store is the last user of a value could be
treated as a special case. Code that reads, computes, and writes values
in a fully parallelizable (unrolled) loop is an ideal case for
vectorization, as it need not introduce any unpack/pack overhead at all.

In the case of the bb-vectorizer (if I understood the parameters
correctly), weighing the final store (or, more generally, any final
consumer of a value) more heavily in the "chain length computation"
would allow using a large chain-length threshold while still picking up
these "embarrassingly parallel loop" cases, where there are potentially
many updates to different variables in memory but the preceding
computation chains are short. Embarrassingly parallel loops of this
type are the basic case when vectorizing multiple instances of OpenCL C
kernels, which are parallel by definition.
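The weighting idea above could be sketched roughly as follows. This is
a hypothetical illustration, not the actual BBVectorizer code; the
names `chain_score`, `chain_is_profitable`, and the `STORE_WEIGHT`
constant are all assumptions made up for the example:

```c
/* Hypothetical sketch of the proposed heuristic: score a candidate
 * pairing chain by its length, but give extra weight to a chain whose
 * final consumer is a store, so that a short compute-then-store chain
 * can still clear a large required-chain-length threshold. */

enum { STORE_WEIGHT = 3 }; /* assumed bonus for a terminating store */

static int chain_score(int chain_len, int ends_in_store) {
    return chain_len + (ends_in_store ? STORE_WEIGHT : 0);
}

/* A chain is considered worth vectorizing when its weighted score
 * reaches the threshold. */
static int chain_is_profitable(int chain_len, int ends_in_store,
                               int threshold) {
    return chain_score(chain_len, ends_in_store) >= threshold;
}
```

With a threshold of 4, a length-2 chain ending in a store would pass,
while the same chain feeding only further in-register computation would
not, which is the distinction the proposal is after.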

E.g. a case where the kernel does something like:

A = read mem
B = read mem
C = add A, B
write C to mem

D = read mem
E = read mem
F = mul D, E
write F to mem

When this is parallelized N times across the work-group, the vectorizer
might fail to vectorize multiple "kernel iterations" properly because
the independent computation chains/live ranges (e.g. from D to F) are
quite short. Still, vectorization is very beneficial here: as we know,
fully parallel loops vectorize perfectly, without unpack/pack
overheads, when all the operations can be vectorized (as is the case
here, since one can scale the work-group size to match the vector
width).
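To make the "no unpack/pack" point concrete, here is a sketch of what
successful vectorization of the example above would amount to, written
with GCC/Clang vector extensions (the `v4si` type, `kernel_x4` name,
and N=4 width are assumptions for illustration, not output of the
pass):

```c
/* Four parallel work-item instances of the scalar kernel body
 *   C = add A, B;  F = mul D, E;
 * collapse into one whole-vector add and one whole-vector mul.
 * No unpack/pack is needed because every operand and result stays
 * in its vector lane from load to store. */
typedef int v4si __attribute__((vector_size(16)));

void kernel_x4(const v4si *A, const v4si *B, v4si *C,
               const v4si *D, const v4si *E, v4si *F) {
    *C = *A + *B;  /* one vector add replaces four scalar adds */
    *F = *D * *E;  /* one vector mul replaces four scalar muls */
}
```

Each lane corresponds to one work-item, so scaling the work-group size
to the vector width makes every scalar operation map one-to-one onto a
vector operation.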

