[llvm] r314435 - [JumpThreading] Preserve DT and LVI across the pass
Daniel Berlin via llvm-commits
llvm-commits at lists.llvm.org
Fri Oct 13 12:38:45 PDT 2017
On Fri, Oct 13, 2017 at 10:48 AM, Brian M. Rzycki <brzycki at gmail.com> wrote:
>> I *believe* the motivation is that they want DT up to date *in-pass* so
>> they can use it to do better jump threading over loops.
>
> Daniel, you are correct in this assumption. Dominance is the first step in
> enhancing the kinds of analysis and extending the number of JumpThreading
> opportunities. Analysis of loops is part of this work.
>
>> *that* kind of change, IMHO, should be subject to our usual "how much
>> faster code does it make vs how much does it cost" tradeoff.
>
> I have not started the work on actual code optimizations simply because I
> could not intelligently make the analysis decisions needed. Rather than
> create a huge change to JT (or another pass), Sebastian and I decided to
> incrementally update JT with the features we needed to get us to the end
> goal.
>
>> This was *our* plan to remove the recalculations, but it does not appear to
>> have been what Sebastian/etc want for JT.
>
> I'm a bit confused by this statement: that's what the patch is doing.
>
You are confused because I was talking about the recalculations that occur
elsewhere.
But let me try to diagram it for you.
Here is the equivalent of what JT did before your patch:

iterate
    for each bb
        maybe change an edge and some code
        call some utilities
    for each dead block
        erase it
    DT->recalculate
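In LLVM terms that shape is roughly the following. This is only a minimal
sketch; transformBlock and removeDeadBlocks are hypothetical stand-ins for
the real JumpThreading work, not actual API:

#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
using namespace llvm;

// Hypothetical helpers standing in for JumpThreading's real work.
bool transformBlock(BasicBlock &BB); // may rewrite edges and call utilities
void removeDeadBlocks(Function &F);  // erase now-unreachable blocks

// Pre-patch shape: do all of the CFG surgery first, rebuild DT once.
bool runIterationOldStyle(Function &F, DominatorTree &DT) {
  bool Changed = false;
  SmallVector<BasicBlock *, 32> Blocks;
  for (BasicBlock &BB : F)
    Blocks.push_back(&BB);
  for (BasicBlock *BB : Blocks)
    Changed |= transformBlock(*BB);
  removeDeadBlocks(F);
  if (Changed)
    DT.recalculate(F); // a single full recomputation per iteration
  return Changed;
}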
Here is what your patch would look like without the incremental API
existing:
iterate
    for each bb
        maybe change an edge and some code
        for each edge we changed:
            DT->recalculate
        call some utilities, calling DT->recalculate in each one
    for each dead block
        erase it
        DT->recalculate
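Reusing the placeholder helpers from the sketch above, that corresponds to
something like this (again illustrative only, not the actual patch):

// Hypothetical shape if every CFG change forced a full DT rebuild.
bool runIterationNaive(Function &F, DominatorTree &DT) {
  bool Changed = false;
  SmallVector<BasicBlock *, 32> Blocks;
  for (BasicBlock &BB : F)
    Blocks.push_back(&BB);
  for (BasicBlock *BB : Blocks)
    if (transformBlock(*BB)) { // changed an edge and some code
      Changed = true;
      DT.recalculate(F);       // full rebuild for every single change (and
    }                          // again inside each utility it calls)
  removeDeadBlocks(F);         // the diagram above also recalculates once
                               // per erased dead block in here
  if (Changed)
    DT.recalculate(F);
  return Changed;
}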
You can see that this has added O(number of changes) DT->recalculate calls
compared to what happened before.
That would be ridiculously slow, probably 100x-1000x as slow as the first
case.
You then have changed that O(number of changes) set of calls to use the
incremental API, so that it looks like this:
iterate
    for each bb
        maybe change an edge and some code
        for each edge we changed:
            incremental update DT
        call some utilities, calling incremental update DT in each one
    for each dead block
        erase it
        incremental update DT
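Concretely, those incremental updates are insertEdge/deleteEdge calls on the
dominator tree. A minimal sketch, where Pred/OldSucc/NewSucc are hypothetical
names for the blocks involved in one threaded edge:

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"
using namespace llvm;

// Sketch: Pred's terminator has already been retargeted from OldSucc to
// NewSucc; tell the dominator tree about exactly that edge change instead
// of rebuilding it from scratch.
void updateDTForOneThreadedEdge(DominatorTree &DT, BasicBlock *Pred,
                                BasicBlock *OldSucc, BasicBlock *NewSucc) {
  DT.insertEdge(Pred, NewSucc); // incremental insert, not O(1) in general
  DT.deleteEdge(Pred, OldSucc); // incremental delete, may rebuild subtrees
}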
These calls, unfortunately, are not O(1) and will never be O(1) (doing so is
provably impossible).
So it's not an O(N) cost you've added; it's O(N^2) or O(N^3) in the worst
case.
> There's only one corner-case where a recalculate occurs. The rest of the
> time it's incremental updates. There may still be opportunities to batch
> updates but I honestly have no idea if it will be more efficient.
>
It will, because it will cause less updating in practice.
If jump threading changes edge A->B to A->C (i.e., insert new edge, remove old
edge) and then changes edge A->C to A->D, you will perform two updates.
If those require full recalculation of the tree (internal to the updater),
you will update the tree twice.
Batching will perform one.
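One way to express that batch is the dominator tree's applyUpdates interface.
A sketch, with hypothetical blocks A, B, C, D and both CFG edits already made
(A now branches to D, not B):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"
using namespace llvm;

void updateDTBatched(DominatorTree &DT, BasicBlock *A, BasicBlock *B,
                     BasicBlock *C, BasicBlock *D) {
  // Eager alternative (two separate, potentially expensive updates):
  //   DT.insertEdge(A, C); DT.deleteEdge(A, B); // after A->B became A->C
  //   DT.insertEdge(A, D); DT.deleteEdge(A, C); // after A->C became A->D

  // Batched: the updater sees that A->C is both inserted and deleted within
  // the batch, so only the net change (delete A->B, insert A->D) actually
  // has to be applied to the tree.
  DT.applyUpdates({{DominatorTree::Insert, A, C},
                   {DominatorTree::Delete, A, B},
                   {DominatorTree::Insert, A, D},
                   {DominatorTree::Delete, A, C}});
}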