[llvm] r314435 - [JumpThreading] Preserve DT and LVI across the pass

Daniel Berlin via llvm-commits llvm-commits at lists.llvm.org
Fri Oct 13 13:04:20 PDT 2017


On Fri, Oct 13, 2017 at 11:05 AM, Brian M. Rzycki <brzycki at gmail.com> wrote:

>> That may actually be fast enough already if you do that, which i'm hoping
>> (but you know better than I) is not a lot more work than you have here.
>
> There are a couple of complications with this method as I think about it.
> The first is that the DT is used outside of the JumpThreading pass itself:
> Local.cpp and other helper functions like the ones in BasicBlockUtils.cpp.
> I'll have to flush pending updates before calling into these, or rework
> their interfaces to pass an optional vector of updates around. The hand-off
> between these separate functions will require figuring out the most
> efficient way of updating the DT.
>

Right. This is what i said previously; it's a bit sucky.
IMHO, the right thing to do is to make a simple updater class and pass it
around.
For starters, it can just wrap a vector of updates and expose the simple API
i gave.
Add eraseBlock, and internally make it call applyUpdates.
(Like i said, we can probably remove this limitation later, but we will
have to do some bookkeeping anyway, so having eraseBlock there would work.)
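
To make that concrete, here's roughly the shape i have in mind (just a
sketch: the class and method names are placeholders, and it assumes
DominatorTree's batch applyUpdates interface):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Dominators.h"
  #include <vector>
  using namespace llvm;

  // Placeholder name: wraps a DominatorTree plus a vector of pending updates.
  class DomTreeBatchUpdater {
    DominatorTree &DT;
    std::vector<DominatorTree::UpdateType> Pending;

  public:
    explicit DomTreeBatchUpdater(DominatorTree &DT) : DT(DT) {}

    // Queue CFG edge changes instead of applying them one at a time.
    void insertEdge(BasicBlock *From, BasicBlock *To) {
      Pending.push_back({DominatorTree::Insert, From, To});
    }
    void deleteEdge(BasicBlock *From, BasicBlock *To) {
      Pending.push_back({DominatorTree::Delete, From, To});
    }

    // For now, deleting a block forces a flush first; the caller still erases
    // the IR block itself. We can relax this with more bookkeeping later.
    void eraseBlock(BasicBlock *BB) {
      flush();
      if (DT.getNode(BB))
        DT.eraseNode(BB); // assumes BB's node is a leaf by this point
    }

    // Hand all queued updates to the incremental updater in one batch.
    void flush() {
      if (Pending.empty())
        return;
      DT.applyUpdates(Pending);
      Pending.clear();
    }
  };

Then JumpThreading and the helpers in Local.cpp / BasicBlockUtils.cpp would
take one of these instead of a raw DominatorTree, queue edge changes as they
rewrite the CFG, and only call flush() at the points that actually need an
up-to-date tree.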


>
> The second is I haven't done the analysis of exactly where I need the DT
> analysis preserved within the JT pass. Part of the reason we did the work
> this way was so we didn't have to worry about the state of DT, we could
> just use it at any point. This proposal changes it to be a much more
> explicit sync-then-analysis event.
>

Yes. But at the current cost of single updates, i can't see how you can make
this work.
We can make batching more efficient.
There are no known algorithms that make individual, single-update-at-a-time
updating significantly more efficient than what we are doing (there are a
couple of optimizations, but profiling this change shows they don't cover
enough of the cases).
For example, in batch updating it's possible to handle nodes that are
eventually deleted somewhere in the sequence of batched updates in nearly
constant time.
Ditto for nodes that become leaves, etc.
None of this is possible when updating one edge at a time.
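
As a toy illustration of the kind of thing only a batch can exploit (just
the shape of the win, not what the real batch updater does; it assumes the
getFrom/getTo/getKind accessors on UpdateType): within one batch, updates to
a given edge have to alternate insert/delete, so an edge that ends up back
where it started never has to touch the tree at all, and otherwise only its
final state matters:

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Dominators.h"
  #include <utility>
  #include <vector>
  using namespace llvm;

  // Coalesce a batch of updates per edge before handing them to applyUpdates:
  // +1 for each Insert, -1 for each Delete, so a net of zero means the edge
  // ends up exactly where it started and all of its updates can be dropped.
  static void applyCoalescedUpdates(
      DominatorTree &DT,
      const std::vector<DominatorTree::UpdateType> &Updates) {
    using Edge = std::pair<BasicBlock *, BasicBlock *>;
    DenseMap<Edge, int> Net;
    for (const DominatorTree::UpdateType &U : Updates)
      Net[{U.getFrom(), U.getTo()}] +=
          (U.getKind() == DominatorTree::Insert) ? 1 : -1;

    std::vector<DominatorTree::UpdateType> Coalesced;
    for (const auto &E : Net) {
      if (E.second == 0)
        continue; // inserted and deleted within the same batch: no net change
      Coalesced.push_back({E.second > 0 ? DominatorTree::Insert
                                        : DominatorTree::Delete,
                           E.first.first, E.first.second});
    }
    DT.applyUpdates(Coalesced);
  }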




> I suspect both of these will cause flushes more frequently than just the
> "block is deleted, so flush the DT" case. It might be more efficient to
> batch the updates; I'm not sure.
>
> Finally, I'd need a way to detect whether the root block is to be removed
> from the function: in that case I have to recalculate the DT.
>

We could fix this in general by virtually rooting both trees, instead of
just the postdom tree.

But there is another thing we could do:
You can only ever replace the entry block with a different block as the
entry; you can't just remove it. That new block should have no predecessors.

As such, it should be a trivial update: reattach the disconnected block as
the new root.
In fact, i bet we could do it as something like this:

  implement swap for DT nodes
  swap the old entry node's data with the new entry node's data
  remove the old entry node

and even the following may work:

  update DT.roots
  temp = DT.nodemapping[oldentry]
  DT.nodemapping.erase(oldentry)
  DT.nodemapping.insert(newentry, temp)
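
Spelled out as C++ (a sketch only: this would have to live inside the
dominator tree itself, since Roots and the node map are internal;
replaceRootBlock and setBlock are made-up names, and it glosses over the
same details the pseudocode above does, like splicing out whatever node
newentry already had):

  // Hypothetical member of the dominator tree class; names are made up.
  // Precondition: NewEntry has no predecessors and is becoming the
  // function's entry block, and OldEntry is about to be deleted.
  void replaceRootBlock(BasicBlock *OldEntry, BasicBlock *NewEntry) {
    // update DT.roots: the tree is now rooted at the new entry block.
    Roots.assign(1, NewEntry);

    // temp = DT.nodemapping[oldentry]
    std::unique_ptr<DomTreeNode> Temp = std::move(DomTreeNodes[OldEntry]);

    // DT.nodemapping.erase(oldentry)
    DomTreeNodes.erase(OldEntry);

    // DT.nodemapping.insert(newentry, temp): reuse the old root's node for
    // the new entry, so every block that was immediately dominated by the
    // old entry is now immediately dominated by the new entry for free.
    Temp->setBlock(NewEntry); // no such setter exists today; purely a sketch
    DomTreeNodes[NewEntry] = std::move(Temp);
  }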

But one thing at a time.


> If this is the recommendation going forward I can start on this work.
>

IMHO, i can't see another way to do this. There are no known algorithms to
make what we are doing faster, and while it's a fertile area of research, i
don't think we will be the ones to come up with a breakthrough here. :)