<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jan 24, 2017 at 12:39 PM, Mehdi Amini <span dir="ltr"><<a href="mailto:mehdi.amini@apple.com" target="_blank">mehdi.amini@apple.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word"><br><div><span class="gmail-"><blockquote type="cite"><div>On Jan 24, 2017, at 11:59 AM, Chandler Carruth <<a href="mailto:chandlerc@gmail.com" target="_blank">chandlerc@gmail.com</a>> wrote:</div><br class="gmail-m_8015369451404605227Apple-interchange-newline"><div><div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Tue, Jan 24, 2017 at 8:19 AM Mehdi Amini via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word" class="gmail-m_8015369451404605227gmail_msg"><div class="gmail-m_8015369451404605227gmail_msg"><div class="gmail-m_8015369451404605227gmail_msg"><br class="gmail-m_8015369451404605227gmail_msg"></div></div></div><div style="word-wrap:break-word" class="gmail-m_8015369451404605227gmail_msg"><div class="gmail-m_8015369451404605227gmail_msg"><div class="gmail-m_8015369451404605227gmail_msg">Another view of it is that there should be another handle that triggers the invalidation of the analysis when this IR is changed: i.e., keeping the analysis cached while it holds a reference to the IR can be seen as the problem.</div></div></div></blockquote><div><br></div><div>The issue is only with AssertingVH, not with other kinds of value handles though. So with those, we would have different invalidation in NDEBUG from !NDEBUG? 
Or would you always do this invalidation?</div></div></div></div></blockquote><div><br></div></span><div>Not sure why the invalidation would be different? If you have an AssertingVH in your analysis, you need a mechanism to invalidate it before the assertion fires. But that invalidation needs to happen both in NDEBUG and !NDEBUG consistently (even though no assertion would fire in NDEBUG mode if this is violated).</div><span class="gmail-"><div><br></div><blockquote type="cite"><div><div dir="ltr"><div class="gmail_quote"><div>I guess I'm not too clear what the concrete approach here would be.</div></div></div>
</div></blockquote></span></div><br><div><div>I should have mentioned that I didn’t give much thought to a concrete plan; it was more an abstract view on the semantics of holding an AssertingVH. </div></div><div>Deciding that AssertingVH is the wrong semantics for a given analysis can be OK (I believe this is what you did, right?), and there might not be any practical alternative (providing invalidation on the fly when the IR is deleted).</div></div></blockquote><div><br></div><div>This basically boils down to whether we want an approach that does invalidation at the boundaries of transformation passes, or one that does invalidation at a finer granularity. Right now it is at the boundary of transformation passes. I thought about doing it at a finer granularity when I was working on the AnalysisManager stuff, and my conclusion was that it is not practical to provide general guarantees about arbitrary analysis invalidation in the middle of the run of a transformation. The reason was that it would require an extremely elaborate "observer" infrastructure. Consider a simple function analysis that computes the number of basic blocks in the function (and its cached analysis result contains that number). 
How would you update/invalidate that at finer granularity?</div><div><br></div><div>One way to think about this is essentially a principle where transformation passes are treated like "grown-ups": if the transformation transforms the IR in a way that would invalidate an analysis result that it needs to look at, the transformation is responsible for knowing about this and invalidating/updating appropriately.</div><div>For example, if a transformation pass queries the simple NumberOfBBsInFunctionAnalysis that I mentioned above, then the transformation pass is responsible for knowing that if it inserts or removes a basic block, it must invalidate/update the NumberOfBBsInFunctionAnalysis analysis result before it next uses it.</div><div>However, if a transformation pass doesn't query NumberOfBBsInFunctionAnalysis, then it doesn't need to update it, and that is harmless: the transformation pass will not say that it preserves NumberOfBBsInFunctionAnalysis, and the stale analysis result will be removed after the transformation is done.</div><div><br></div><div>The primary issue with this kind of approach is that a transformation pass must know about all analyses that it *transitively* depends on. This is not a problem for most analyses since they have a static set of dependencies. Also, the transformation pass is likely already directly using one of those dependencies that needs to be updated anyway; for example, an analysis might depend on domtree, but the transformation pass is likely to already be using domtree explicitly, so it's already Doing The Right Thing based on the "grown-up" principle above. One place where this is more problematic is AA, where the set of dependencies is dynamic, so a transformation pass obviously can't be aware of what is actually depended on dynamically. There are probably some unwritten/unspoken/un-thought-about restrictions keeping things working currently, and we probably want to formalize those restrictions some day. 
For example, I don't think it would be sane for the correctness of an AA query to depend on an up-to-date NumberOfBBsInFunctionAnalysis. Presumably, it is okay for an AA analysis to depend on a non-stale domtree though; this is equivalent to saying that a transformation pass that uses AA must ensure that it keeps domtree up to date.</div><div><br></div><div><br></div><div>Also keep in mind that this is not the only instance where "stale" analysis results might hang around for a while, and the other situations result in needing similar sorts of restrictions on what passes can/can't do and what they *must* do to ensure correctness. Invalidation of module analyses doesn't happen until the PreservedAnalyses set bubbles up to the Module-level PassManager, which holds the ModuleAnalysisManager. For smaller IRUnits, the ModuleAnalysisManager is const. So we are living with "stale" module analysis results across arbitrarily many function transformations. The current approach is that module analyses must update themselves or be conservatively correct when the IR changes in smaller IRUnits. The same issue happens for any Outer:Inner pair. E.g., function analyses that happen to be invalidated by loop transformations; in that case I think the current approach is that loop transformations must keep the function analyses up to date themselves (actually the FunctionAnalysisManager is const for loop transformations; Chandler, how does this work currently? I guess I haven't been following closely enough. E.g., if a loop transformation needs to update the domtree, how does that happen?).</div><div>Hopefully at some point we'll be able to figure out and write down the exact restrictions (some are enforceable (and enforced!) 
with assertions in the code, but others, like staying conservatively correct, are not).</div><div><br></div><div>-- Sean Silva</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word"><div><br></div><div>— </div><span class="gmail-HOEnZb"><font color="#888888"><div>Mehdi</div><div><br></div></font></span></div></blockquote></div><br></div></div>