[LLVMdev] SIV tests in LoopDependence Analysis, Sanjoy's patch

Hal Finkel hfinkel at anl.gov
Mon May 14 15:52:58 PDT 2012


On Mon, 14 May 2012 15:18:12 -0700
Preston Briggs <preston.briggs at gmail.com> wrote:

> On Mon, May 14, 2012 at 2:14 PM, Hal Finkel <hfinkel at anl.gov> wrote:
> >> > One thing that I would like to mention is that 'use' here should
> >> > also include user feedback. This means being able to pass
> >> > information back to the frontends about which loops are being
> >> > effectively analyzed, and for loops that are not, why not.
> >>
> >> Absolutely. I've been thinking in terms of passing info back to the
> >> programmer (see
> >> https://sites.google.com/site/parallelizationforllvm/feedback).
> >> It's a very interesting problem and one where I think there are
> >> real research possibilities.
> >
> > Do you think that we should do this by adding metadata, or should we
> > establish some kind of separate feedback channel? Metadata would
> > make it more useful for writing regression tests (perhaps), but a
> > separate feedback channel might be more useful for the front ends.
> > Maybe we should have a separate feedback channel that, lacking any
> > other consumer, writes out metadata?
> 
> I don't know what's best.  Probably different uses merit different
> mechanisms.

While this is true, I would prefer we have a framework to handle this
so that it does not turn into a 'mechanism zoo'.

> 
> At Tera, we did regression tests by using command-line flags to
> provoke particular passes to dump so-called signature information to
> stderr. For example, -trace:PAR_SIG would cause the parallelizer to
> dump out a condensed account of what it did for each loop nest.
> Similarly, -trace:LS_SIG would cause the loop scheduler (i.e.,
> software pipeliner) to dump a summary for each inner loop. As part of
> each night's tests, we'd compare the signatures against standards and
> report differences.
> 
> Later, we developed a tool called "Canal" that took essentially the
> same information and used it to produce an annotated listing, where
> each loop nest was marked to indicate what the parallelizer had done
> to it, each inner loop was decorated with info about what happened
> during software pipelining, etc. Still later, we built a GUI tool to
> report the same info in a somewhat more convenient fashion.

This is what I would like to enable, but what I don't want is for it
to involve parsing a bunch of arbitrarily-formatted text strings
produced by different backend passes. Structured text is fine, I think;
we'd just need a way of attaching it to source locations and conveying
it to the frontends. Metadata is fine (as that is also structured),
although it has the disadvantage that later optimization passes might
drop it(?).
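
To make that a bit more concrete, here is a rough sketch of the kind
of common record I have in mind; the names here (FeedbackRecord,
FeedbackChannel, etc.) are purely hypothetical and not an existing
LLVM API. Each pass would fill in a small structured report tied to a
source location, and a single channel would decide whether to hand it
to the frontend, serialize it as metadata, or both:

  // Purely hypothetical sketch of a shared feedback record -- not an
  // existing LLVM API; all names are made up for illustration.
  #include <map>
  #include <string>
  #include <vector>

  struct SourceLoc {
    std::string File;
    unsigned Line = 0;
    unsigned Col = 0;
  };

  struct FeedbackRecord {
    std::string PassName;                       // e.g. "loop-parallelize"
    std::string Kind;                           // e.g. "missed", "applied"
    SourceLoc Loc;                              // where the loop lives in the source
    std::map<std::string, std::string> Detail;  // structured payload, not free text
  };

  // One channel shared by all passes; a frontend can drain it, or a
  // default consumer could fall back to attaching the records to the
  // IR as metadata.
  class FeedbackChannel {
  public:
    void emit(const FeedbackRecord &R) { Records.push_back(R); }
    const std::vector<FeedbackRecord> &records() const { return Records; }

  private:
    std::vector<FeedbackRecord> Records;
  };

The important part is that the payload stays structured key/value data
rather than free-form strings, so frontends and regression tests can
consume it without writing a parser per pass.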

> 
> The most useful information came from the various transformation
> passes. For example, the parallelizer would report that certain
> loop-carried dependences prevented parallelization; the software
> pipeliner would report on the length of inner loops (in ticks), the
> balance between memory references and floating-point ops, and so
> forth.

This certainly makes sense. FWIW, I think that we should have a system
in which the transformation passes can trigger sending analysis
information back to the frontend as part of this process. Loop-carried
dependences are one good use case; alias-analysis information is,
IMHO, another.
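
To make the loop-carried dependence case concrete, here is a minimal
(and obviously contrived) example of the kind of loop a parallelizer
would have to reject, next to one it could accept; being able to point
the user at the exact source line and the offending reference
(a[i - 1] below) is precisely the feedback I'd like to see:

  // Iteration i reads a[i - 1], which iteration i - 1 wrote: a flow
  // (true) dependence carried by the loop, so the iterations cannot
  // safely run in parallel.
  void recurrence(double *a, int n) {
    for (int i = 1; i < n; ++i)
      a[i] = a[i - 1] + 1.0;
  }

  // No loop-carried dependence on 'a' (assuming 'a' and 'b' do not
  // alias), so a parallelizer could report this loop as safe to run
  // in parallel.
  void independent(double *a, const double *b, int n) {
    for (int i = 0; i < n; ++i)
      a[i] = b[i] + 1.0;
  }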

 -Hal

> 
> Preston



-- 
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory


