[llvm-dev] Should analyses be able to hold AssertingVH to IR? (related to PR28400)
Sean Silva via llvm-dev
llvm-dev at lists.llvm.org
Thu Jan 26 21:22:27 PST 2017
On Tue, Jan 24, 2017 at 12:39 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
> On Jan 24, 2017, at 11:59 AM, Chandler Carruth <chandlerc at gmail.com> wrote:
> On Tue, Jan 24, 2017 at 8:19 AM Mehdi Amini via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>> Another view of it is that there should be another handle that triggers
>> the invalidation of the analysis when this IR is changed: i.e. keeping the
>> analysis cached while it holds a reference to the IR can be seen as the
> The issue is only with AssertingVH, not with other kinds of value handles,
> though. So with those, we would have different invalidation in NDEBUG from
> !NDEBUG? Or would you always do this invalidation?
> Not sure why the invalidation would be different? If you have an
> AssertingVH in your analysis, you need a mechanism to invalidate it before
> the assertion fires. But that invalidation needs to happen both in NDEBUG
> and !NDEBUG consistently (even if there wouldn’t be an assertion in NDEBUG
> mode if this is violated).
> I guess I'm not too clear what the concrete approach here would be.
> I should have mentioned that I didn’t give much thought to a concrete
> plan; it was more an abstract view of the semantics of holding AssertingVH.
> Deciding that AssertingVH is the wrong semantic for a given analysis can
> be OK (I believe this is what you did, right?), and there might not be any
> practical alternative (to provide invalidation on the fly when IR is
This basically boils down to whether we want invalidation at the boundaries
of transformation passes or invalidation at a finer granularity. Right now
it is at the boundary of
transformation passes. I thought about doing it at a finer granularity when
I was working on the AnalysisManager stuff and my conclusion was that it is
not practical to provide general guarantees about arbitrary analysis
invalidation in the middle of the run of a transformation. The reason was
that it would require an extremely elaborate "observer" infrastructure.
Consider a simple function analysis that computes the number of basic
blocks in the function (and its cached analysis result contains that
number). How would you update/invalidate that at finer granularity?
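For concreteness, here is a rough sketch of what that toy analysis could look
like against the new pass manager's analysis interface. The name and the
Result type are invented for this discussion, not anything in-tree:

#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// Toy analysis: caches the number of basic blocks in the function.
class NumberOfBBsInFunctionAnalysis
    : public AnalysisInfoMixin<NumberOfBBsInFunctionAnalysis> {
  friend AnalysisInfoMixin<NumberOfBBsInFunctionAnalysis>;
  static AnalysisKey Key;

public:
  struct Result {
    unsigned NumBlocks;
  };

  Result run(Function &F, FunctionAnalysisManager &) {
    Result R;
    R.NumBlocks = F.size(); // Function::size() is the basic block count.
    return R;
  }
};

AnalysisKey NumberOfBBsInFunctionAnalysis::Key;

Note that the Result has no custom invalidate() method, so by default it is
dropped whenever a pass doesn't report it as preserved.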
One way to think about this is essentially a principle where transformation
passes are treated like "grown-ups": If the transformation transforms the
IR in a way that would invalidate an analysis result that it needs to look
at, the transformation is responsible for knowing about this and for
updating or invalidating that analysis result itself.
For example, if a transformation pass queries the simple
NumberOfBBsInFunctionAnalysis that I mentioned above, then the
transformation pass is responsible for knowing that if it inserts or
removes a basic block, it must invalidate/update the
NumberOfBBsInFunctionAnalysis analysis result before it next uses it.
However, if a transformation pass doesn't query
NumberOfBBsInFunctionAnalysis, then it doesn't need to update it, and that
is harmless since the transformation pass will not say that it preserves
NumberOfBBsInFunctionAnalysis and the stale analysis result will be removed
after the transformation is done.
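To make that contract concrete, here is a hedged sketch of a transformation
pass following the rule, reusing the NumberOfBBsInFunctionAnalysis sketch
above. The pass name and the elided CFG surgery are made up for illustration:

#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

struct SplitSomeBlocksPass : PassInfoMixin<SplitSomeBlocksPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
    auto &BBInfo = AM.getResult<NumberOfBBsInFunctionAnalysis>(F);
    // ... make some decision based on BBInfo.NumBlocks ...

    bool InsertedBlocks = false;
    // ... imagine CFG surgery here that may add or remove basic blocks,
    // setting InsertedBlocks accordingly ...

    if (InsertedBlocks) {
      // "Grown-up" rule: this pass both reads the block count and changes
      // it, so it must bring the cached result back in sync before it (or
      // anything it calls) looks at that result again during this run. For
      // a result this small, recomputing in place is the simplest update.
      BBInfo.NumBlocks = F.size();
    }

    // At the pass boundary, don't claim to preserve the analysis; any
    // result this pass didn't fix up gets dropped by the analysis manager.
    return PreservedAnalyses::none();
  }
};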
The primary issue with this kind of approach is that a transformation pass
must know about all analyses that it *transitively* depends on. This is not
a problem for most analyses since they have a static set of dependencies.
Also, the transformation pass likely is actually directly using one of
those dependencies that needs to be updated anyway; for example, an
analysis might depend on domtree, but the transformation pass is likely to
already be using domtree explicitly so it's already Doing The Right Thing
based on the "grown-up" principle above. One place where this is more
problematic is for AA where the set of dependencies is dynamic and a
transformation pass obviously can't be aware of what is actually depended
on dynamically. There are probably some unwritten/unspoken/un-thought-about
restrictions keeping things working currently, and we probably want to
formalize those restrictions some day. For example, I don't think it would
be sane for the correctness of an AA query to depend on an up-to-date
NumberOfBBsInFunctionAnalysis. Presumably, it is okay for an AA analysis to
depend on a non-stale domtree though; this is equivalent to saying that a
transformation pass that uses AA must ensure that it keeps domtree up to
date.
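As a hedged sketch of that last point, a pass that mixes AA queries with CFG
changes might look roughly like this (the pass body is illustrative; only the
analysis-manager and domtree calls are real API):

#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

struct UsesAAAndChangesCFGPass : PassInfoMixin<UsesAAAndChangesCFGPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
    AAResults &AA = AM.getResult<AAManager>(F);
    DominatorTree &DT = AM.getResult<DominatorTreeAnalysis>(F);
    (void)AA;

    bool ChangedCFG = false;
    // ... transformation guided by AA queries that may change the CFG,
    // setting ChangedCFG accordingly ...

    if (ChangedCFG) {
      // Keep the domtree non-stale before issuing further AA queries,
      // since AA may consult it. A full recalculation is the blunt but
      // safe option; a real pass would usually update it incrementally.
      DT.recalculate(F);
    }

    // Having kept the domtree in sync, it is honest to report it preserved.
    PreservedAnalyses PA;
    PA.preserve<DominatorTreeAnalysis>();
    return PA;
  }
};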
Also keep in mind that this is not the only instance where "stale" analysis
results might hang around for a while, and the other situations result in
needing similar sorts of restrictions on what passes can/can't do and what
they *must* do to ensure correctness. Invalidation of module analyses
doesn't happen until the PreservedAnalyses set bubbles up to the
Module-level PassManager, which holds the ModuleAnalysisManager. For
smaller IRUnits, the ModuleAnalysisManager is const. So we are living with
"stale" module analysis results across arbitrarily many function
transformations. The current approach is that module analyses must update
themselves or be conservatively correct when the IR changes in smaller
IRUnits. The same issue happens for any Outer:Inner pair. E.g.
function analyses that happen to be invalidated by loop transformations; in
that case I think the current approach is that loop transformations must
keep the function analyses up to date themselves (actually the
FunctionAnalysisManager is const for loop transformations; Chandler, how
does this work currently? I guess I haven't been following closely enough.
E.g. if a loop transformation needs to update the domtree, how does that
work?)
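For the module-analysis half of this, here is a hedged sketch of what that
const-ness looks like from inside a function pass: module results are only
reachable through the proxy as cached, and therefore possibly stale, results.
SomeModuleAnalysis is a placeholder name, not an in-tree analysis:

#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// Placeholder module analysis, purely for illustration.
class SomeModuleAnalysis : public AnalysisInfoMixin<SomeModuleAnalysis> {
  friend AnalysisInfoMixin<SomeModuleAnalysis>;
  static AnalysisKey Key;

public:
  struct Result {
    unsigned NumFunctions;
  };

  Result run(Module &M, ModuleAnalysisManager &) {
    Result R;
    R.NumFunctions = M.size();
    return R;
  }
};

AnalysisKey SomeModuleAnalysis::Key;

struct PeeksAtModuleInfoPass : PassInfoMixin<PeeksAtModuleInfoPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
    // The outer (module) analysis manager is only visible as const here.
    const ModuleAnalysisManager &MAM =
        AM.getResult<ModuleAnalysisManagerFunctionProxy>(F).getManager();

    // Only getCachedResult is available: this pass can't force a
    // (re)computation, and whatever is cached may not reflect IR changes
    // made since the module pass manager last had control.
    if (const auto *Info =
            MAM.getCachedResult<SomeModuleAnalysis>(*F.getParent())) {
      (void)Info; // ... use it, treating it as possibly stale/conservative ...
    }

    return PreservedAnalyses::all();
  }
};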
Hopefully at some point we'll be able to figure out and write down the exact
restrictions (some are enforceable (and enforced!) with assertions in the
code, but others, like staying conservatively correct, are not).
-- Sean Silva