[LLVMdev] Dependence Analysis [was: Flow-Sensitive AA]

David Greene dag at cray.com
Wed Sep 3 13:06:46 PDT 2008


On Wednesday 03 September 2008 12:39, Matthieu Delahaye wrote:
> API: This is a matter of providing the ability to ask useful
> questions, and of providing useful answers [from the point of view of
> the passes that are using the analysis].
>
> The "textbook" version would be: give me the memory dependency(ies)
> between these two instructions, with the ability to select the
> "dialect" of the answer:
> - There is a dependency, there is not, or maybe
> - Direction vectors
> - Distance vectors
> ...
>
> For many of us, including myself, this is all I would need. But if I

Right.

> would be working on vectorization, another question like "Tell me
> whether there is no distance vector with a distance on the last level
> less than 4" is more meaningful. And:

So why not have the caller examine the distance vectors?  We don't need
an additional API to ask this question.  The analysis should just do the
analysis; it doesn't need to know every possible use of the results.

> - this can be proved far faster than generating the full dependence
> information (I have seen 50x speedup figures, but this is
> implementation-dependent)

Just to clarify, are you saying answering the "any distance < 4" question can 
be faster than generating all the distance vectors?  Ok, yeah, I can believe
that, so we probably do want an API for that.  So ignore my previous 
paragraph.  :)

> - could be proved even in some cases where the Omega test cannot.

Ok, I can believe that too.

> So two questions: Are there any other kinds of queries? Are they
> actually useful?

If we need more queries later we can always add them.

The way I would approach this is to implement the basic API (yes/no/maybe
and distance/direction vectors) and let clients use that information to derive
their own answers.  After some experience with that we'll have a better sense
of what other APIs might be useful.  Then we can start adding things like the
"distance < N" query.

> Now for the "useful" answers: We have to admit that the proportion of
> loops whose upper bounds are known constant values at compile-time is
> decreasing rapidly... 

Depends what you mean by "known" and "at compile time."  It's not uncommon
to generate conditional code where there are runtime checks for loop bounds 
and a selection among code generated with different parallelization 
strategies.  So strictly speaking the loop bounds are not statically known but 
the compiler makes them known to itself for special cases by inserting runtime 
tests.
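
A tiny hand-written illustration of the idea (not the output of any real
compiler, just the shape of the transformation):

    // The trip count 'n' is not a compile-time constant, but the loop
    // can be versioned so that one path has a statically known bound.
    void scale(double *a, int n) {
      if (n == 1024) {
        // Inside this guard the bound is a known constant, so this
        // copy can be unrolled or vectorized with an exact trip count.
        for (int i = 0; i < 1024; ++i)
          a[i] *= 2.0;
      } else {
        // Generic fallback for every other trip count.
        for (int i = 0; i < n; ++i)
          a[i] *= 2.0;
      }
    }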

The compiler can also use symbol information to infer minimum and maximum
bounds.  For example, it's reasonable to assume a loop won't iterate beyond 
the bounds of the arrays used in the loop, so if you have symbol information 
you can figure more things out than what you'd get from simply examining the 
code.  Range analysis can also play a role here.
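
For example, with a hypothetical file-scope array:

    double a[100];  // declared extent bounds the legal index range

    void bump(int n) {
      // 'n' is unknown, but a well-defined execution never indexes
      // past a[99], so an analysis armed with the symbol information
      // may assume a trip count of at most 100.
      for (int i = 0; i < n; ++i)
        a[i] += 1.0;
    }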

> Some analyses handle that well, at the cost of potentially generating
> different results predicated by conditions on these symbolic values.

You read my mind.  :)  Though what do you mean by "different results" here?  
Different code?  I hope you don't mean "different answers."  :)

> The expression of these predicates might not be trivial.

Funny story, I've seen literally PAGES of scalar code that attempts to figure 
out whether vectorization applies at runtime and then selects the vector code 
if it does.  Usually vectorization applies and the vector code is so fast it 
more than makes up for the additional computation involved in the checks.
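
Compressed to a few lines, such a guard has roughly this shape (a
hand-rolled sketch; as I said, the real generated checks can run for
pages):

    #include <cstddef>
    #include <cstdint>

    void add(double *a, const double *b, int n) {
      // Runtime disambiguation: if [a, a+n) and [b, b+n) cannot
      // overlap, the loop carries no dependence and the fast path is
      // safe.
      std::uintptr_t ai = reinterpret_cast<std::uintptr_t>(a);
      std::uintptr_t bi = reinterpret_cast<std::uintptr_t>(b);
      std::size_t bytes = static_cast<std::size_t>(n) * sizeof(double);
      if (ai + bytes <= bi || bi + bytes <= ai) {
        for (int i = 0; i < n; ++i)  // proven independent: vectorize
          a[i] += b[i];
      } else {
        for (int i = 0; i < n; ++i)  // possible overlap: stay scalar
          a[i] += b[i];
      }
    }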

> This is what I had in mind when I mentioned above the difficulty of
> having a good API that does not change every time a new analysis is
> added. I
> doubt this API could be defined without the review of analysis users and
> writers.

But ultimately these "tricks" such as runtime decision-making are not under 
the purview of the memory dependence analysis.  In these cases the compiler is 
"forcing" an answer it wants by creating code paths where it knows certain 
properties.  There's no analysis necessary because the compiler created those 
paths in the first place with very specific goals in mind.  I think the
interface you've outlined above is sufficient for now.  We'll gain experience 
and add to it later if necessary.

> Now about the chaining: it is a matter of precision. We want either the
> exact result or, when that is not possible, the most precise
> conservative answer [when we want to know more than yes/no/maybe].

Right.

> The solution, as you describe it, is ok as long as one can provide an
> exact result. However, when no test can:
> - You may have obtained an exact result by making them work together
> rather than independently.

Possibly, though I don't have a good picture of what "work together" looks 
like.  When I think about alias analysis, for example, I think of cheaper 
analyses chaining to more expensive ones.  The more expensive ones subsume the 
functionality of the cheaper ones.  This property may not always hold for 
memory dependence analysis.  But in those cases, is it really possible to 
combine results and come up with something more precise?  I don't know.
Again, I think it's reasonable to start with a model based on how alias 
analysis works and then enhance the model if we find cases where this kind of 
interaction is beneficial.
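
Concretely, the chaining I'm picturing is nothing fancier than this
(reusing the hypothetical interface sketched earlier in this mail):

    // Try the cheap test first; fall back to the expensive one only
    // when the cheap test cannot give a definite answer.
    DepResult chainedDepends(Instruction *Src, Instruction *Dst,
                             DependenceAnalysis &Cheap,
                             DependenceAnalysis &Expensive) {
      DepResult R = Cheap.depends(Src, Dst);
      if (R != DepResult::Maybe)
        return R;                          // settled cheaply
      return Expensive.depends(Src, Dst);  // pay for precision
    }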

The key is not to make the perfect the enemy of the good.  Get something going 
now.  We can always make it better later.

> - What conservative answer do we provide when every test fails, but
> two or more tests return different conservative answers?

I would think most of the time it would be pretty clear (you want to take the 
greatest distance, for example).  For those cases where it's not clear, just 
pick one for now.  We'll gain experience and tune it later.

                                                       -Dave



