[LLVMdev] Vectorization: Next Steps

Hal Finkel hfinkel at anl.gov
Sat Feb 4 12:27:44 PST 2012


On Fri, 2012-02-03 at 20:59 -0800, Preston Briggs wrote:
> Hal wrote:
> > 3. Loop vectorization - It would be nice to have, in addition to
> > basic-block vectorization, a more-traditional loop vectorization
> > pass. I think that we'll need a better loop analysis pass in order
> > for this to happen. Some of this was started in
> > LoopDependenceAnalysis, but that pass is not yet finished. We'll
> > need something like this to recognize affine memory references, etc.
> 
> I've recently started working on a dependence analyzer for LLVM.

Great!

>   We're more interested in parallelization than vectorization,

I am also interested in parallelization, but that is another story ;)

> so are building a dependence graph for a complete function.  Of
> course, such a thing is useful for vectorization and all sorts of
> other dependence-based loop transforms.
> 
> I'm looking at the problem in two parts:
>      1. a dependence test that, given two memory references, decides
>         if a dependence exists between them, and
>      2. the dependence-graph builder that looks over the complete
>         function and finds pairs of memory references to pass to the
>         dependence test, using the results to build a dependence
>         graph, with edges labeled as to the kind of dependence and
>         direction vectors.
> Currently I'm focused on (2), the simpler task I think, as a way to
> learn my way around the infrastructure. The idea, roughly, is to find all
> the alias sets associated with each load and store and compute what
> Wolfe calls factored redef-use chains for each alias set (similar to
> SSA, but supporting computation of all 4 kinds of dependences). By
> exploring these chains, we can find the right pairs to test,
> accounting for control flow and the confusion caused by procedure
> calls, etc.
> 

This sounds very useful.
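
As a point of reference for the discussion, here is a rough sketch of
the kind of per-edge information I would expect the builder to record:
the classical dependence kind plus a direction vector with one entry
per common loop level. All of the names here are mine and purely
hypothetical; this is only meant to make the later comments concrete.

// A sketch (hypothetical names) of the per-edge payload: the kind of
// dependence plus a direction vector over the common loop levels.
#include "llvm/Instruction.h"
#include "llvm/ADT/SmallVector.h"

namespace llvm {

enum DependenceKind { Flow, Anti, Output, Input };
enum Direction { DirLT, DirEQ, DirGT, DirStar }; // '<', '=', '>', '*'

struct DependenceEdge {
  Instruction *Src;                 // source memory reference
  Instruction *Dst;                 // sink memory reference
  DependenceKind Kind;              // one of the four classical kinds
  SmallVector<Direction, 4> DirVec; // direction per common loop level
};

} // end namespace llvm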

> 
> I'm not yet sure how I'll proceed for (1).  It seems natural and
> wise to take advantage of the scalar evolutions, but they're new to me
> and I'll need to study them more.  I'd like to provide an interface
> that's independent of (2), so that it can be used independently, e.g.,
> by a software pipeliner. 

This makes sense.

> I'd also like to provide a caching interface, so we can avoid
> re-testing pairs of references with identical subscript forms.  It
> also seems desirable to start with relatively simple tests, and expand
> the coverage over time.  We'll see how it goes.

For both (1) and (2), if you'd like to sketch out the interface you have
in mind, I'd be happy to provide some feedback.
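
To make the kind of feedback I have in mind concrete, here is a
strawman for (1), reusing the DependenceEdge sketch above. Everything
here is hypothetical; the real thing would presumably be an analysis
built on top of ScalarEvolution, and the caching you mention could be
keyed on the address expressions so that pairs with identical
subscript forms are only analyzed once.

// Strawman interface (hypothetical) for the pairwise test in (1).
#include "llvm/Instruction.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include <utility>

namespace llvm {

class DependenceTester {
public:
  explicit DependenceTester(ScalarEvolution &S) : SE(S) {}

  // Returns null if Src and Dst are proven independent; otherwise
  // returns a heap-allocated DependenceEdge owned by the caller.
  DependenceEdge *testDependence(Instruction *Src, Instruction *Dst);

private:
  ScalarEvolution &SE;

  // Cache keyed on the (source, sink) address expressions, so that
  // pairs with identical subscript forms are only tested once.
  DenseMap<std::pair<const SCEV *, const SCEV *>, DependenceEdge *> Cache;
};

} // end namespace llvm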

On a practical note, I think that it is important that the mechanisms
developed for this support both direct multidimensional indexing
(a[i][j][k]) and manually-expanded indexing (a[n*(j+n*i) + k]).
Recognizing the latter may require some specific idiom-recognition code
(and scalar evolution analysis should be very useful for this).
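
To make that concrete, here is the kind of pair I have in mind
(illustrative only). Both loops touch the same elements in the same
order; ideally the dependence test recovers the same affine subscripts
{i, j, k} from both, and recognizing the second form is where scalar
evolution should help.

// Direct multidimensional indexing: subscripts are directly visible.
enum { N = 8 };

void initDirect(double a[N][N][N]) {
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j)
      for (int k = 0; k < N; ++k)
        a[i][j][k] = 0.0;
}

// Manually-expanded (linearized) indexing of the same N*N*N elements:
// N*(j + N*i) + k is the same affine address as N*N*i + N*j + k.
void initLinearized(double *a) {
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j)
      for (int k = 0; k < N; ++k)
        a[N * (j + N * i) + k] = 0.0;
}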

 -Hal 

> 
> 
> Preston
> 
> 

-- 
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory



