[PATCH] D12905: [SCEV][LV] Introduce SCEV Predicates and use them to re-implement stride versioning

Adam Nemet via llvm-commits llvm-commits at lists.llvm.org
Sat Oct 3 21:56:28 PDT 2015


anemet added inline comments.

================
Comment at: include/llvm/Analysis/LoopAccessAnalysis.h:470-473
@@ -460,6 +469,6 @@
 public:
   LoopAccessInfo(Loop *L, ScalarEvolution *SE, const DataLayout &DL,
                  const TargetLibraryInfo *TLI, AliasAnalysis *AA,
                  DominatorTree *DT, LoopInfo *LI,
-                 const ValueToValueMap &Strides);
+                 const ValueToValueMap &Strides, const unsigned SCEVComplexity);
 
----------------
sbaranga wrote:
> anemet wrote:
> > > My understanding is that the strides in the Strides map can be replaced with 1 if needed. However it is not guaranteed that we will need to replace these strides with 1.
> > 
> > Fair enough.
> > 
> > > I was thinking that maybe it would be a good idea to have some VersioningParameters struct to hold all the parameters that tell us if we can use versioning? So at the moment it would be the strides map and the SCEV complexity. Do you think that would make sense?
> > 
> > Just to be clear, I am not looking for stylistic improvements here but trying to make sense of the semantics.  If I understand correctly, SCEVComplexity specifies the cost of the checks that we're allowed to accumulate in order for LAA to complete its analysis.
> > 
> > I don't understand why that is an incoming parameter.  It seems to me that a better formulation would be to have LAA do its thing, accumulate whatever checks it needs and then make that cost value part of the LAI state.  Then a client pass can query that and decide whether it's willing to pay that cost in order to get an analyzable loop or not.
> > 
> > To put it another way, it seems to me that with the current interface you can have this scenario:
> > 
> > 1. One client specifies a low complexity value.  There is no LAI for the loop so we compute it but fail because we need more checks than allowed.  We *cache* the result.
> > 2. The next client wants to specify a higher value but can't because the LAI is already cached; plus, as the code is currently written, this will lead to an assert.
> > 
> > So how do you envision this scenario to work?
> > 
> Yes, your analysis is correct and that scenario is a problem.
> 
> I chose the current solution because it was equivalent to the existing implementation. I agree with your assessment.
> 
> Some possible solutions:
>   a) no bounds in LAA on predicate sizes. This can have a negative impact on compile time.
>   b) LAA has its own logic for computing the limit, and we can make sure the limit is high enough to cover all the users.
>   c) we have some initial default bounds, but clients can request an increase by throwing away the cached LAI result and computing a new one (with an increased threshold). Recomputing the LAI would not be ideal.
> 
> I like variant b).
> b) is orthogonal to c), but we theoretically wouldn't need c) right now. 
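To make the caching pitfall concrete, here is a small self-contained sketch (hypothetical names like `AnalysisResult`/`getInfo`, nothing from the actual patch): the first client's threshold gets baked into the cached result, so a later client with a larger budget either sees the stale failure or trips the assert.

```cpp
#include <cassert>
#include <map>

// Hypothetical stand-ins for Loop / LoopAccessInfo; not the real LLVM types.
struct AnalysisResult {
  unsigned Threshold;  // the SCEVComplexity the result was computed with
  bool Analyzable;     // false if the needed checks exceeded Threshold
};

struct Analysis {
  std::map<int, AnalysisResult> Cache; // keyed by a loop id

  const AnalysisResult &getInfo(int LoopID, unsigned SCEVComplexity) {
    auto It = Cache.find(LoopID);
    if (It != Cache.end()) {
      // Current interface: a cached result computed with a different
      // threshold is a hard error for the next client.
      assert(It->second.Threshold == SCEVComplexity &&
             "cached LAI computed with a different SCEV complexity");
      return It->second;
    }
    unsigned ChecksNeeded = 10; // pretend this loop needs 10 checks
    AnalysisResult R{SCEVComplexity, ChecksNeeded <= SCEVComplexity};
    return Cache.emplace(LoopID, R).first->second;
  }
};
```

With this shape, a low-budget client (say, complexity 2) caches a failed result for the loop, and any later query with a different complexity hits the assert rather than recomputing.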
Do you know about any compile-time or algorithmic complexity issues?

I am not sure if we want to approximate compile time with the cost of the checks.

If we know of some worse-than-linear algorithms here, I would much rather control them locally than hope that the number of checks would catch them (especially once we start having checks for different types of predicates, like non-wrapping, etc.)
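Roughly what I have in mind, as a sketch (hypothetical names, not the actual LAA code): the analysis accumulates whatever checks it needs, records their cost in its result state, and each client compares that cost against its own budget, so one cached result serves clients with different thresholds.

```cpp
#include <cassert>
#include <map>

// Hypothetical stand-ins; not the real LLVM types.
struct AnalysisResult {
  unsigned CheckCost = 0; // cost of all runtime checks the analysis needed
  bool CanAnalyze = false;
};

struct Analysis {
  std::map<int, AnalysisResult> Cache; // keyed by a loop id

  // No threshold parameter: the analysis just records what it needed.
  const AnalysisResult &getInfo(int LoopID) {
    auto It = Cache.find(LoopID);
    if (It == Cache.end()) {
      AnalysisResult R;
      R.CheckCost = 10; // pretend this loop needs checks costing 10
      R.CanAnalyze = true;
      It = Cache.emplace(LoopID, R).first;
    }
    return It->second;
  }
};

// Each client pass applies its own budget to the shared cached result.
bool clientWillVersion(Analysis &A, int LoopID, unsigned Budget) {
  const AnalysisResult &R = A.getInfo(LoopID);
  return R.CanAnalyze && R.CheckCost <= Budget;
}
```

Here a low-budget client simply declines to version the loop, while a higher-budget client reuses the very same cached result and proceeds; no recomputation and no assert.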


http://reviews.llvm.org/D12905
