[PATCH] D31239: [WIP] Add Caching of Known Bits in InstCombine

Daniel Berlin via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 22 14:47:52 PDT 2017


On Wed, Mar 22, 2017 at 2:34 PM, Michael Zolotukhin via Phabricator <
reviews at reviews.llvm.org> wrote:

> mzolotukhin added a comment.
>
> Thanks for posting this, Hal!
>
> I'll run some experiments with it, but in general I support this approach.
>
> @rnk:
>
> > Long term, I agree with Danny that computing known bits on demand
> doesn't seem to scale. We might want to just do the analysis up front and
> see if that works better.
>
> The problem with computing everything upfront is that at some point we
> might want to partially invalidate the results. Recomputing everything from
> scratch after such an invalidation sounds inefficient,


(Assume, for a second, that it is inefficient.)
What makes you think you have to do that?
Why are the only two options "compute on demand" and "build from scratch"
and not "incrementally recompute"?

Most of these are lattice propagation problems.
Most are incrementally recomputable by resetting the lattice state of the
members of the def-use chain being touched, and restarting propagation
from the first one.
(The ones that aren't are those that rely on things like reachable-block
state, but even that is fixable.)
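
To make that concrete, here is a minimal sketch of the incremental scheme
I mean. Every name below (KnownBitsCache, recomputeFrom) is made up for
illustration; this is not code from the patch, and it restricts itself to
integer-typed values to keep the sketch simple:

#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/KnownBits.h"

// Hypothetical cache of known-bits lattice values, keyed by instruction.
struct KnownBitsCache {
  llvm::DenseMap<const llvm::Instruction *, llvm::KnownBits> Entries;

  // When an instruction is rewritten, reset the lattice state along its
  // def-use chain and re-propagate from the changed definition, rather
  // than throwing the whole cache away.
  void recomputeFrom(llvm::Instruction *Changed, const llvm::DataLayout &DL) {
    llvm::SmallVector<llvm::Instruction *, 16> Worklist;
    Worklist.push_back(Changed);
    while (!Worklist.empty()) {
      llvm::Instruction *I = Worklist.pop_back_val();
      if (!I->getType()->isIntOrIntVectorTy())
        continue; // this sketch only tracks integer-typed values
      llvm::KnownBits Fresh = llvm::computeKnownBits(I, DL);
      auto It = Entries.find(I);
      if (It != Entries.end() && It->second.Zero == Fresh.Zero &&
          It->second.One == Fresh.One)
        continue; // lattice value unchanged; propagation stops here
      Entries[I] = Fresh;
      // The lattice value changed, so users may need updating too.
      for (llvm::User *U : I->users())
        if (auto *UI = llvm::dyn_cast<llvm::Instruction>(U))
          Worklist.push_back(UI);
    }
  }
};

Note the early exit when the lattice value does not change: that is what
keeps the incremental update cheap relative to recomputing the whole
function.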

FWIW: it also may not be inefficient. Again, we are not building a
super-optimizer here, so it may be good enough to do it, say, twice, and
live with the fact that we are never going to minimize every possible
expression to the smallest possible state. What matters is whether the
performance of the generated code suffers or not.

The example I usually trot out here is that gcc computes aliasing *twice*
in the entire compiler. It beats us on pretty much every alias-precision
metric and memory-optimization metric you can find. So the "completely
stateless" work we do in basicaa, to have everything be the best it can
be at all times, has still not won there.
There are plenty of other examples.



> and if we add caching, we'll end up with something like what's proposed
> here.


I think you meant to write something else, because if we add caching, we
definitely end up with something like what is proposed, since caching is
what is proposed ;)
If you mean it would look similar to incremental recomputation:
it is definitely *much* easier to verify that the state of the analysis
is sane if you do incremental updating instead of caching.

I.e., trivially, at pretty much any point, you can say "recompute what
this *should* look like against the function", and compare that to what
you got from incremental recomputation.

Local caching like this is very hard to verify.
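
With incremental updating, that verifier is nearly trivial to write.
A sketch, reusing the hypothetical KnownBitsCache from above:

#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"

// Hypothetical sanity check: recompute every cached entry from scratch
// and compare it against the incrementally maintained state.
bool verifyKnownBitsCache(llvm::Function &F, const KnownBitsCache &Cache,
                          const llvm::DataLayout &DL) {
  for (llvm::Instruction &I : llvm::instructions(F)) {
    if (!I.getType()->isIntOrIntVectorTy())
      continue; // the sketch only caches integer-typed values
    auto It = Cache.Entries.find(&I);
    if (It == Cache.Entries.end())
      continue; // nothing cached for this instruction
    llvm::KnownBits Fresh = llvm::computeKnownBits(&I, DL);
    if (It->second.Zero != Fresh.Zero || It->second.One != Fresh.One)
      return false; // incremental state diverged from ground truth
  }
  return true;
}

Run under something like EXPENSIVE_CHECKS, that catches a stale cache the
moment it diverges, which is exactly what is hard to do with ad-hoc local
caching.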


> To me this looks very similar to what we have in SCEV (a map from
> expression to the corresponding analysis result, that we can invalidate and
> recompute whenever it's requested again), and I think SCEV has been working
> pretty well so far in this aspect.
>
> Michael
>
>
> https://reviews.llvm.org/D31239
>
>
>
>

