[PATCH] D96168: [AssumptionCache] Avoid dangling llvm.assume calls in the cache

Johannes Doerfert via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Feb 9 21:19:19 PST 2021


jdoerfert added inline comments.


================
Comment at: llvm/include/llvm/Analysis/AssumptionCache.h:81
+  /// Set of value handles for calls of the \@llvm.assume intrinsic.
+  using AssumeHandleSet = DenseSet<AssumeHandle, DenseMapInfo<Value *>>;
+  AssumeHandleSet AssumeHandles;
----------------
Meinersbur wrote:
> jdoerfert wrote:
> > Meinersbur wrote:
> > > Changing this list to a DenseSet introduces non-determinism in Polly's test case `ScopInfo/user_provided_assumptions.ll` and even in `basic.ll`, which is changed in this patch. ScalarEvolution is known to give different results depending on the order in which elements are processed. `PredicateInfo` might emit different IR depending on the order as well.
> > > 
> > > I think we should play it safe and change it back to a `SmallVector`. Unfortunately, a SetVector would be of no help either, since removing an element still requires searching for it in the vector.
> > > 
> > I'm not sure I follow. If a user would benefit from assumptions that now come "later" in the traversal, it should simply keep scanning. `basic.ll` tests a print method, which can be non-deterministic, IMHO. Long story short, I'd say SCEV or Polly should find the best information among all assumptions.
> In the case of Polly, each added assumption is simplified using the previous assumptions. This gives non-deterministic results such as
> ```
> remark: <unknown>:0:0: Use user assumption: [Debug, M] -> {  : Debug = 0 and 0 < M <= 100 }
> remark: <unknown>:0:0: Use user assumption: [Debug, M, N] -> {  : Debug = 0 and 0 < M <= 100 and N >= -2147483648 - M }
> remark: <unknown>:0:0: Use user assumption: [Debug, M, N] -> {  : Debug = 0 and 0 < M <= 100 and N > 0 }
> remark: <unknown>:0:0: Use user assumption: [Debug, M, N] -> {  : Debug = 0 and 0 < M <= 100 and 0 < N <= 2147483647 - M }
> ```
> 
> ```
> remark: <unknown>:0:0: Use user assumption: [N, M] -> {  : M >= -2147483648 - N }
> remark: <unknown>:0:0: Use user assumption: [N, M] -> {  : N > 0 and M >= -2147483648 - N }
> remark: <unknown>:0:0: Use user assumption: [N, M] -> {  : N > 0 and -2147483648 - N <= M <= 2147483647 - N }
> remark: <unknown>:0:0: Use user assumption: [N, M, Debug] -> {  : Debug = 0 and N > 0 and 0 < M <= 2147483647 - N and M <= 100 }
> ```
> 
> ```
> remark: <unknown>:0:0: Use user assumption: [M, N] -> {  : N <= 2147483647 - M }
> remark: <unknown>:0:0: Use user assumption: [M, N] -> {  : -2147483648 - M <= N <= 2147483647 - M }
> remark: <unknown>:0:0: Use user assumption: [M, N, Debug] -> {  : Debug = 0 and 0 < M <= 100 and -2147483648 - M <= N <= 2147483647 - M }
> remark: <unknown>:0:0: Use user assumption: [M, N, Debug] -> {  : Debug = 0 and 0 < M <= 100 and 0 < N <= 2147483647 - M }
> ```
> 
> i.e. it's not just the order in which they are printed.
> 
> Even if that can be fixed in Polly, as already mentioned, the result ScalarEvolution returns depends on cached SCEVs (e.g. to resolve recursion), i.e. its result depends on previous queries, as much as we dislike that. See for instance D74810.
I think I'm missing the point about ScalarEvolution here: is it impacted by the order of assumptions?

For Polly, just to check, the sets are equivalent, right?
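
A minimal, self-contained sketch of the container trade-off discussed above (plain `int`s standing in for the assume value handles; this is only an illustration, not code from this patch):

```
// Sketch only: ints stand in for the assume value handles; the point is
// the iteration/removal behaviour of the three containers discussed above.
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  const int Assumes[] = {7, 42, 3, 19}; // registration order of the calls

  // SmallVector (the old storage): iteration order is the insertion order,
  // so every consumer sees the assumptions in registration order.
  llvm::SmallVector<int, 4> Vec;
  for (int A : Assumes)
    Vec.push_back(A);

  // DenseSet (the new storage): iteration order follows the hash/bucket
  // layout and is unspecified; with pointer-based keys, as in the real
  // cache, it can even vary between runs, so consumers that fold
  // assumptions incrementally may observe them in different orders.
  llvm::DenseSet<int> Set;
  for (int A : Assumes)
    Set.insert(A);

  // SetVector: set semantics plus deterministic insertion order, but
  // remove() still does a linear scan of the underlying vector.
  llvm::SetVector<int> SetVec;
  for (int A : Assumes)
    SetVec.insert(A);
  SetVec.remove(42); // O(n) search in the vector

  for (int A : Vec)
    llvm::errs() << A << ' ';
  llvm::errs() << '\n';
  for (int A : Set)
    llvm::errs() << A << ' ';
  llvm::errs() << '\n';
  for (int A : SetVec)
    llvm::errs() << A << ' ';
  llvm::errs() << '\n';
}
```

The `SmallVector` keeps registration order, the `DenseSet` iterates in an unspecified bucket order, and the `SetVector` is order-stable but pays the linear `remove()` noted above.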


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D96168/new/

https://reviews.llvm.org/D96168


