[PATCH] D7864: This patch introduces MemorySSA, a virtual SSA form for memory. Details on what it looks like are in MemorySSA.h

George Burgess IV via llvm-commits llvm-commits at lists.llvm.org
Mon Feb 1 22:08:01 PST 2016


> I'm not sure such a flag makes sense, given the above. I'm not sure you
> *want* to try to alternate between them, because a lot of your
> order-of-magnitude speedups will come from doing things fundamentally
> differently than random querying (i.e., move to more "SSA-like" algorithms).
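For concreteness, here is a rough sketch of the difference being described, written against the MemorySSA API from this patch (the header path and exact signatures are assumptions and may differ from what lands):

// Rough illustration only; assumes the MemorySSA API from D7864
// (header location and exact signatures may differ from the final patch).
#include "llvm/Transforms/Utils/MemorySSA.h"
using namespace llvm;

// MemDep-style "random querying": one dependency query per load, where each
// query may rescan the same blocks/instructions.
//
//   for (LoadInst *LI : Loads)
//     MemDepResult Dep = MDA.getDependency(LI);
//
// "SSA-like": start from a store's MemoryDef and visit its users directly;
// the walk is proportional to the number of memory uses, with no rescanning.
static void visitUsesOfStore(MemorySSA &MSSA, StoreInst &SI) {
  MemoryAccess *Def = MSSA.getMemoryAccess(&SI);
  if (!Def)
    return;
  for (User *U : Def->users())
    if (auto *MU = dyn_cast<MemoryUse>(U)) {
      // MU is a load-like access whose defining access is SI's MemoryDef.
      (void)MU;
    }
}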

Then, do you know of a better way to let people opt in to or out of using
MemorySSA? Or do you not see the transition to MemorySSA causing many
issues? (I'm assuming it's the latter. :) )
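To illustrate the kind of opt-in/opt-out I'm asking about, a hypothetical cl::opt switch might look like the sketch below; the flag name and the wiring are made up for illustration and are not part of D7864:

// Hypothetical sketch only -- the flag name and the helpers it calls are
// invented for illustration and are not part of D7864.
#include "llvm/Support/CommandLine.h"

static llvm::cl::opt<bool> UseMemorySSA(
    "use-memoryssa", // hypothetical flag name
    llvm::cl::desc("Query MemorySSA instead of MemoryDependenceAnalysis "
                   "in passes that support both"),
    llvm::cl::init(false));

// A pass supporting both backends would then branch on the flag:
//
//   if (UseMemorySSA)
//     runWithMemorySSA(F);   // hypothetical helpers
//   else
//     runWithMemDep(F);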

> If you do the optimizations, I'm happy to start to convert passes. I was
> going to just do them in close to the order necessary to preserve MemorySSA
> from beginning to end of the opts that use it, but happy to do whatever.
> Converting most passes requires building an update API, and converting a
> few of them and seeing what they want out of updating is the best way I can
> think of to design that API.

WFM on both counts. I'll start with the things that we ripped out, then look
at MemDep to figure out what tricks it has up its sleeve. Is there anywhere
else you would recommend looking?
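To make the update-API question concrete, here is a hypothetical sketch of the bookkeeping a DSE-style pass would need when it deletes a store. The helper below is not an API that exists in D7864; it only shows what "updating" would have to mean:

// Hypothetical sketch -- not part of D7864; it only shows the bookkeeping an
// update API would need to do when a pass removes a store.
#include "llvm/Transforms/Utils/MemorySSA.h"
using namespace llvm;

static void removeDeadStore(MemorySSA &MSSA, StoreInst &DeadSI) {
  if (auto *MD =
          dyn_cast_or_null<MemoryDef>(MSSA.getMemoryAccess(&DeadSI))) {
    // Anything defined-by this store is now defined-by whatever the store
    // itself was defined-by (splice the MemoryDef out of the def chain).
    MD->replaceAllUsesWith(MD->getDefiningAccess());
    // A real update API would also erase MD from MemorySSA's tables here.
  }
  DeadSI.eraseFromParent();
}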

On Mon, Feb 1, 2016 at 9:37 PM, Daniel Berlin <dberlin at dberlin.org> wrote:

>
>>
>> I don't have any plans. If we want to port existing passes soon, we'll
>> either need an update API, or we'll need to restrict ourselves to passes
>> that don't need one.
>
>
> FWIW: For the passes I played with converting, timing "compute MemorySSA +
> use it" against MemDep, for basically all testcases I could find, the time
> difference was in the noise.
> For larger cases, MemorySSA is a straight win.
> This is because most of the MemDep-based passes are walking a ton of
> blocks/insts repeatedly anyway.
>
>
>
>> Also, we'll need a wrapper that queries either MemDep or MemorySSA if we
>> want to have a global "use MemorySSA?" flag.
>
>
> I'm not sure such a flag makes sense, given the above. I'm not sure you
> *want* to try to alternate between them, because a lot of your
> order-of-magnitude speedups will come from doing things fundamentally
> differently than random querying (i.e., move to more "SSA-like" algorithms).
>
> (Though I expect using MemorySSA as a straight replacement will still likely
> be faster).
>
>
>> If we think making a simple MemorySSA-based DSE pass would be a better
>> first step, I'm fine doing that as well. Same thing goes for re-adding the
>> optimizations that were stripped out today.
>>
>>
> If you do the optimizations, I'm happy to start to convert passes. I was
> going to just do them in close to the order necessary to preserve MemorySSA
> from beginning to end of the opts that use it, but happy to do whatever.
>
> (MergedLoadStoreMotion and MemCpyOptimizer are the easiest things to
> convert and will give speedups).
>
>
> Converting most passes requires building an update API, and converting a
> few of them and seeing what they want out of updating is the best way I can
> think of to design that API.
>
> (You can see where I played with it in the MergedLoadStoreMotion pass).
>
>
>> So long as progress is made, I'm happy. :)
>>
>>
>> http://reviews.llvm.org/D7864
>>
>>
>>
>>
>

