<div dir="ltr">Thanks!<div><br></div><div>Some more nitpicking below. :-)<br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 17, 2016 at 4:30 PM, George Burgess IV <span dir="ltr"><<a href="mailto:george.burgess.iv@gmail.com" target="_blank">george.burgess.iv@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>+Danny, so he can correct me if I'm wrong/potentially give better answers for some of these.</div><div><br></div><div>Thanks for the feedback! :)</div><div><br></div><div>Updates pushed in r279007.</div><span><div><br></div>> <span style="font-size:12.8px">All of the ``Foo`` s don't render correctly, unfortunately</span><div><span style="font-size:12.8px"><br></span></div></span><div><span style="font-size:12.8px">Interesting. Looks like the "correct" way to do this is ``Foo``\ s.</span></div><span><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">> </span><span style="font-size:12.8px">It's not entirely clear to me what the last sentence means - I didn't understand what you can and can't look up, and how</span></div><div><span style="font-size:12.8px"><br></span></div></span><div><span style="font-size:12.8px">Fixed.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">> </span><span style="font-size:12.8px">definite -> define</span></div><div><span style="font-size:12.8px"><br></span></div><div>I think the sentence is meant to be read as "that is, phi nodes merge things that are definitely new versions of the variables." So, "definite" seems correct to me.</div><div><br></div></div></blockquote><div><br></div><div>Ah, ok, I see what you mean.</div><div>I'd still suggest a reword, though - I've automatically read that as a typo for "define", and it makes a lot of sense that way.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div></div><div>> <span style="font-size:12.8px">unquantifiable? :-)</span></div><span><div><span style="font-size:12.8px">> </span><span style="font-size:12.8px">It would probably be good to explain why non-relaxed atomics, volatiles and fences count as defs. It's true that they tend to be optimization barriers, but saying they "clobber" memory sounds fishy.</span></div><div style="font-size:12.8px"><div><div></div></div></div><div><span style="font-size:12.8px"><br></span></div></span><div><span style="font-size:12.8px">Conceptually, I don't see a problem with saying that about loads with (>= acquire) ordering. OTOH, I agree that volatiles may be a bit confusing.</span><span style="font-size:12.8px"> I'm uncertain if this wording implies something it shouldn't, but how about:</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">s/otherwise clobber memory in unquantifiable ways/may introduce ordering constraints, relative to other memory operations/</span></div><div><br></div><div><span style="font-size:12.8px">In the end, whether we deem an Instruction a Use or a Def boils down to "does AA claim that this instruction could modify memory?" 
Practically, both volatile and (>= acquire) atomic loads can have memory ops that depend on them (...which would end up being represented as the things they "clobber"), and MemoryUses can't have dependencies.</span><br></div><span><div><span style="font-size:12.8px"><br></span></div></span></div></blockquote><div><br></div><div>Why not write this out explicitly?</div><div>I think there's a difference between a "def" and "an instruction that introduces ordering constraints". It's true that we tend to treat anything that introduces ordering constraints as a def. But this is just a convenience - defs introduce ordering constraints, and we already have defs, so we can map anything "bad" to a def, and we'll be safe. </div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span><div><span style="font-size:12.8px"></span></div><div>> <span style="font-size:12.8px">I looked the description up in MemorySSA.h and it mentions we intentionally choose not to disambiguate defs. Assuming this is a consequence of that choice, documenting it here as well would clear things up. </span></div><div><span style="font-size:12.8px">> (Ok, I got to the "Use optimization" part, and it explains this, but I think the order of presentation makes the whole thing somewhat confusing. It may be better to either prefetch the "use optimization" discussion, or first show an unoptimized form, and then show an optimized one later. Although that may also be rather confusing...)</span></div><div><span style="font-size:12.8px"><br></span></div></span><div>Yeah. :/ Added a "see below" link to hopefully make thing a bit better.</div><span><div><br></div><div>> <span style="font-size:12.8px">Did you mean that the default walker queries some pre-defined AA, but you can create a walker that queries a different one?</span></div><div><span style="font-size:12.8px"><br></span></div></span><div><span style="font-size:12.8px">Yup; fixed.</span></div><div><div class="m_6944445389074008115h5"><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><br></span></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 16, 2016 at 11:52 PM, Michael Kuperstein <span dir="ltr"><<a href="mailto:michael.kuperstein@gmail.com" target="_blank">michael.kuperstein@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks a lot for writing this up, George!<div><br></div><div>I haven't been following the MemorySSA discussions, and I've been waiting for somebody to write up a doc for the implementation so I can come up to speed. </div><div>So, some comments from the perspective of a total MemorySSA noob (that is, the intended audience of this document :-) follow.</div><div class="gmail_extra"><br><div class="gmail_quote"><div><div>On 16 August 2016 at 17:17, George Burgess IV via llvm-commits <span dir="ltr"><<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Author: gbiv<br>
Date: Tue Aug 16 19:17:29 2016<br>
New Revision: 278875<br>
<br>
URL: <a href="http://llvm.org/viewvc/llvm-project?rev=278875&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-pr<wbr>oject?rev=278875&view=rev</a><br>
Log:<br>
[Docs] Add initial MemorySSA documentation.<br>
<br>
Patch partially by Danny.<br>
<br>
Differential Revision: <a href="https://reviews.llvm.org/D23535" rel="noreferrer" target="_blank">https://reviews.llvm.org/D2353<wbr>5</a><br>
<br>
Added:<br>
llvm/trunk/docs/MemorySSA.rst<br>
Modified:<br>
llvm/trunk/docs/AliasAnalysis.rst<br>
llvm/trunk/docs/index.rst<br>
<br>
Modified: llvm/trunk/docs/AliasAnalysis.rst<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/AliasAnalysis.rst?rev=278875&r1=278874&r2=278875&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-pr<wbr>oject/llvm/trunk/docs/AliasAna<wbr>lysis.rst?rev=278875&r1=278874<wbr>&r2=278875&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/docs/AliasAnalysis.rst (original)<br>
+++ llvm/trunk/docs/AliasAnalysis.rst Tue Aug 16 19:17:29 2016<br>
@@ -702,6 +702,12 @@ algorithm will have a lower number of ma<br>
Memory Dependence Analysis<br>
==========================<br>
<br>
+.. note::<br>
+<br>
+ We are currently in the process of migrating things from<br>
+ ``MemoryDependenceAnalysis`` to :doc:`MemorySSA`. Please try to use<br>
+ that instead.<br>
+<br>
If you're just looking to be a client of alias analysis information, consider<br>
using the Memory Dependence Analysis interface instead. MemDep is a lazy,<br>
caching layer on top of alias analysis that is able to answer the question of<br>
<br>
Added: llvm/trunk/docs/MemorySSA.rst<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/MemorySSA.rst?rev=278875&view=auto" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-pr<wbr>oject/llvm/trunk/docs/MemorySS<wbr>A.rst?rev=278875&view=auto</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/docs/MemorySSA.rst (added)<br>
+++ llvm/trunk/docs/MemorySSA.rst Tue Aug 16 19:17:29 2016<br>
@@ -0,0 +1,358 @@<br>
+=========<br>
+MemorySSA<br>
+=========<br>
+<br>
+.. contents::<br>
+ :local:<br>
+<br>
+Introduction<br>
+============<br>
+<br>
+``MemorySSA`` is an analysis that allows us to cheaply reason about the<br>
+interactions between various memory operations. Its goal is to replace<br>
+``MemoryDependenceAnalysis`` for most (if not all) use-cases. This is because,<br>
+unless you're very careful, use of ``MemoryDependenceAnalysis`` can easily<br>
+result in quadratic-time algorithms in LLVM. Additionally, ``MemorySSA`` doesn't<br>
+have as many arbitrary limits as ``MemoryDependenceAnalysis``, so you should get<br>
+better results, too.<br></blockquote><div><br></div></div></div><div>Replacing AliasSetTracker is also a goal, right?</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+<br>
+At a high level, one of the goals of ``MemorySSA`` is to provide an SSA based<br>
+form for memory, complete with def-use and use-def chains, which<br>
+enables users to quickly find may-def and may-uses of memory operations.<br>
+It can also be thought of as a way to cheaply give versions to the complete<br>
+state of heap memory, and associate memory operations with those versions.<br>
+<br>
+This document goes over how ``MemorySSA`` is structured, and some basic<br>
+intuition on how ``MemorySSA`` works.<br>
+<br>
+A paper on MemorySSA (with notes about how it's implemented in GCC) `can be<br>
+found here <<a href="http://www.airs.com/dnovillo/Papers/mem-ssa.pdf" rel="noreferrer" target="_blank">http://www.airs.com/dnovillo/<wbr>Papers/mem-ssa.pdf</a>>`_. Though, it's<br>
+relatively out-of-date; the paper references multiple heap partitions, but GCC<br>
+eventually swapped to just using one, like we now have in LLVM. Like<br>
+GCC's, LLVM's MemorySSA is intraprocedural.<br>
+<br>
+<br>
+MemorySSA Structure<br>
+===================<br>
+<br>
+MemorySSA is a virtual IR. After it's built, ``MemorySSA`` will contain a<br>
+structure that maps ``Instruction`` s to ``MemoryAccess`` es, which are<br>
+``MemorySSA``'s parallel to LLVM ``Instruction`` s.<br>
+<br>
+Each ``MemoryAccess`` can be one of three types:<br>
+<br>
+- ``MemoryPhi``<br>
+- ``MemoryUse``<br>
+- ``MemoryDef``<br>
+<br>
+``MemoryPhi`` s are ``PhiNode`` s, but for memory operations. If at any<br></blockquote><div><br></div></div></div><div>All of the ``Foo`` s don't render correctly, unfortunately.<br></div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+point we have two (or more) ``MemoryDef`` s that could flow into a<br>
+``BasicBlock``, the block's top ``MemoryAccess`` will be a<br>
+``MemoryPhi``. As in LLVM IR, ``MemoryPhi`` s don't correspond to any<br>
+concrete operation. As such, you can't look up a ``MemoryPhi`` with an<br>
+``Instruction`` (though we do allow you to do so with a<br>
+``BasicBlock``).</blockquote><div><br></div></span><div>It's not entirely clear to me what the last sentence means - I didn't understand what you can and can't look up, and how.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
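+<br>
+As a rough illustration, the lookup looks something like this from C++ (a<br>
+minimal sketch with made-up helper names, assuming a ``MemorySSA`` reference<br>
+is already in hand; see ``MemorySSA.h`` for the exact accessor signatures):<br>
+<br>
+.. code-block:: c++<br>
+<br>
+  #include "llvm/Transforms/Utils/MemorySSA.h" // Header path as of this commit.<br>
+<br>
+  // Sketch: look up the MemoryAccess for an Instruction, and the MemoryPhi<br>
+  // (if any) that sits at the top of a BasicBlock.<br>
+  void lookupExample(llvm::MemorySSA &MSSA, llvm::Instruction &I,<br>
+                     llvm::BasicBlock &BB) {<br>
+    // Loads, stores, calls, etc. map to a MemoryUse or MemoryDef; an<br>
+    // Instruction that doesn't touch memory maps to nothing (nullptr).<br>
+    if (llvm::MemoryAccess *MA = MSSA.getMemoryAccess(&I))<br>
+      MA->dump();<br>
+    // MemoryPhis have no corresponding Instruction, so they are looked up<br>
+    // by the BasicBlock they live in instead.<br>
+    if (llvm::MemoryAccess *Phi = MSSA.getMemoryAccess(&BB))<br>
+      Phi->dump();<br>
+  }<br>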
+<br>
+Note also that in SSA, Phi nodes merge must-reach definitions (that<br>
+is, definite new versions of variables). In MemorySSA, PHI nodes merge<br></blockquote><div><br></div></span><div>definite -> define</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+may-reach definitions (that is, until disambiguated, the versions that<br>
+reach a phi node may or may not clobber a given variable).<br>
+<br>
+``MemoryUse`` s are operations which use but don't modify memory. An example of<br>
+a ``MemoryUse`` is a ``load``, or a ``readonly`` function call.<br>
+<br>
+``MemoryDef`` s are operations which may either modify memory, or which<br>
+otherwise clobber memory in unquantifiable ways. Examples of ``MemoryDef`` s<br></blockquote><div><br></div></span><div>unquantifiable? :-)<br></div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+include ``store`` s, function calls, ``load`` s with ``acquire`` (or higher)<br>
+ordering, volatile operations, memory fences, etc.<br><br></blockquote><div><br></div></span><div>It would probably be good to explain why non-relaxed atomics, volatiles and fences count as defs. It's true that they tend to be optimization barriers, but saying they "clobber" memory sounds fishy.</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+Every function has a special ``MemoryDef`` called ``liveOnEntry``.<br>
+It dominates every ``MemoryAccess`` in the function that ``MemorySSA`` is being<br>
+run on, and implies that we've hit the top of the function. It's the only<br>
+``MemoryDef`` that maps to no ``Instruction`` in LLVM IR. Use of<br>
+``liveOnEntry`` implies that the memory being used is either undefined or<br>
+defined before the function begins.<br>
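+<br>
+For instance, a client might test for this case roughly as follows (a minimal<br>
+sketch with a made-up helper name, assuming a ``MemorySSA`` reference is<br>
+already available; it relies on the use optimization described below, which<br>
+points each ``MemoryUse`` at its actual clobber):<br>
+<br>
+.. code-block:: c++<br>
+<br>
+  #include "llvm/Transforms/Utils/MemorySSA.h" // Header path as of this commit.<br>
+<br>
+  // Sketch: is the memory read by this instruction only written (if at all)<br>
+  // before the function begins?  True when its clobber is liveOnEntry.<br>
+  bool clobberedOnlyOutsideFunction(llvm::MemorySSA &MSSA,<br>
+                                    llvm::Instruction &I) {<br>
+    auto *Use =<br>
+        llvm::dyn_cast_or_null<llvm::MemoryUse>(MSSA.getMemoryAccess(&I));<br>
+    if (!Use)<br>
+      return false; // Not a MemoryUse; it may not read memory at all.<br>
+    // MemoryUse operands are optimized to the actual clobber at build time.<br>
+    return MSSA.isLiveOnEntryDef(Use->getDefiningAccess());<br>
+  }<br>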
+<br>
+An example of all of this overlaid on LLVM IR (obtained by running ``opt<br>
+-passes='print<memoryssa>' -disable-output`` on an ``.ll`` file) is below. When<br>
+viewing this example, it may be helpful to view it in terms of clobbers. The<br>
+operands of a given ``MemoryAccess`` are all (potential) clobbers of said<br>
+MemoryAccess, and the value produced by a ``MemoryAccess`` can act as a clobber<br>
+for other ``MemoryAccess`` es. Another useful way of looking at it is in<br>
+terms of heap versions. In that view, operands of a given<br>
+``MemoryAccess`` are the version of the heap before the operation, and<br>
+if the access produces a value, the value is the new version of the heap<br>
+after the operation.<br>
+<br>
+.. code-block:: llvm<br>
+<br>
+ define void @foo() {<br>
+ entry:<br>
+ %p1 = alloca i8<br>
+ %p2 = alloca i8<br>
+ %p3 = alloca i8<br>
+ ; 1 = MemoryDef(liveOnEntry)<br>
+ store i8 0, i8* %p3<br>
+ br label %while.cond<br>
+<br>
+ while.cond:<br>
+ ; 6 = MemoryPhi({%0,1},{if.end,4})<br>
+ br i1 undef, label %if.then, label %if.else<br>
+<br>
+ if.then:<br>
+ ; 2 = MemoryDef(6)<br>
+ store i8 0, i8* %p1<br>
+ br label %if.end<br>
+<br>
+ if.else:<br>
+ ; 3 = MemoryDef(6)<br>
+ store i8 1, i8* %p2<br>
+ br label %if.end<br>
+<br>
+ if.end:<br>
+ ; 5 = MemoryPhi({if.then,2},{if.else,3})<br>
+ ; MemoryUse(5)<br>
+ %1 = load i8, i8* %p1<br>
+ ; 4 = MemoryDef(5)<br>
+ store i8 2, i8* %p2<br>
+ ; MemoryUse(1)<br>
+ %2 = load i8, i8* %p3<br>
+ br label %while.cond<br>
+ }<br>
+<br>
+The ``MemorySSA`` IR is located in comments that precede the instructions they map<br>
+to (if such an instruction exists). For example, ``1 = MemoryDef(liveOnEntry)``<br>
+is a ``MemoryAccess`` (specifically, a ``MemoryDef``), and it describes the LLVM<br>
+instruction ``store i8 0, i8* %p3``. Other places in ``MemorySSA`` refer to this<br>
+particular ``MemoryDef`` as ``1`` (much like how one can refer to ``load i8, i8*<br>
+%p1`` in LLVM with ``%1``). Again, ``MemoryPhi`` s don't correspond to any LLVM<br>
+Instruction, so the line directly below a ``MemoryPhi`` isn't special.<br>
+<br>
+Going from the top down:<br>
+<br>
+- ``6 = MemoryPhi({%0,1},{if.end,4})`` notes that, when entering ``while.cond``,<br>
+ the reaching definition for it is either ``1`` or ``4``. This ``MemoryPhi`` is<br>
+ referred to in the textual IR by the number ``6``.<br>
+- ``2 = MemoryDef(6)`` notes that ``store i8 0, i8* %p1`` is a definition,<br>
+ and its reaching definition before it is ``6``, or the ``MemoryPhi`` after<br>
+ ``while.cond``.<br></blockquote><div><br></div></div></div><div>It's not really clear why this is the case, even though 2 is the only store to %p1.</div><div>Naively, looking at it from the "clobbering" perspective, I'd expect them to be on separate chains, and to have another phi at the entry to while.cond - something like</div><div>; 7 = MemoryPhi({%0, liveOnEntry},{if.end, 2})</div><div>...</div><div>; 2 = MemoryDef(7)</div><div><br></div><div>One option is that queries that depend on alias analysis are left entirely to the walker - but then it's not clear why load %2 MemoryUses(1), rather than 6.</div><div>I looked the description up in MemorySSA.h and it mentions we intentionally choose not to disambiguate defs. Assuming this is a consequence of that choice, documenting it here as well would clear things up. <br></div><div><br></div><div>(Ok, I got to the "Use optimization" part, and it explains this, but I think the order of presentation makes the whole thing somewhat confusing. It may be better to either prefetch the "use optimization" discussion, or first show an unoptimized form, and then show an optimized one later. Although that may also be rather confusing...)</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+- ``3 = MemoryDef(6)`` notes that ``store i8 1, i8* %p2`` is a definition; its<br>
+ reaching definition is also ``6``.<br>
+- ``5 = MemoryPhi({if.then,2},{if.else,3})`` notes that the clobber before<br>
+ this block could either be ``2`` or ``3``.<br>
+- ``MemoryUse(5)`` notes that ``load i8, i8* %p1`` is a use of memory, and that<br>
+ it's clobbered by ``5``.<br>
+- ``4 = MemoryDef(5)`` notes that ``store i8 2, i8* %p2`` is a definition; its<br>
+ reaching definition is ``5``.<br>
+- ``MemoryUse(1)`` notes that ``load i8, i8* %p3`` is just a user of memory,<br>
+ and the last thing that could clobber this use is above ``while.cond`` (e.g.<br>
+ the store to ``%p3``). In heap versioning parlance, it really<br>
+ only depends on the heap version 1, and is unaffected by the new<br>
+ heap versions generated since then.<br>
+<br>
+As an aside, ``MemoryAccess`` is a ``Value`` mostly for convenience; it's not<br>
+meant to interact with LLVM IR.<br>
+<br>
+Design of MemorySSA<br>
+===================<br>
+<br>
+``MemorySSA`` is an analysis that can be built for any arbitrary function. When<br>
+it's built, it does a pass over the function's IR in order to build up its<br>
+mapping of ``MemoryAccess`` es. You can then query ``MemorySSA`` for things like<br>
+the dominance relation between ``MemoryAccess`` es, and get the ``MemoryAccess``<br>
+for any given ``Instruction`` .<br>
+<br>
+When ``MemorySSA`` is done building, it also hands you a ``MemorySSAWalker``<br>
+that you can use (see below).<br>
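+<br>
+A hedged sketch of what a legacy-pass-manager client might look like is below;<br>
+the wrapper-pass and accessor names are assumed from ``MemorySSA.h`` at this<br>
+revision, and the pass itself is made up for illustration:<br>
+<br>
+.. code-block:: c++<br>
+<br>
+  #include "llvm/Pass.h"<br>
+  #include "llvm/Transforms/Utils/MemorySSA.h" // Header path as of this commit.<br>
+<br>
+  namespace {<br>
+  // Sketch: a pass that only inspects MemorySSA and leaves the IR alone.<br>
+  struct MemorySSAUserExample : public llvm::FunctionPass {<br>
+    static char ID;<br>
+    MemorySSAUserExample() : llvm::FunctionPass(ID) {}<br>
+<br>
+    void getAnalysisUsage(llvm::AnalysisUsage &AU) const override {<br>
+      AU.addRequired<llvm::MemorySSAWrapperPass>();<br>
+      AU.setPreservesAll();<br>
+    }<br>
+<br>
+    bool runOnFunction(llvm::Function &F) override {<br>
+      llvm::MemorySSA &MSSA =<br>
+          getAnalysis<llvm::MemorySSAWrapperPass>().getMSSA();<br>
+      // The walker answers clobber queries (see the next section).<br>
+      llvm::MemorySSAWalker *Walker = MSSA.getWalker();<br>
+      (void)Walker;<br>
+      return false; // We didn't change the IR.<br>
+    }<br>
+  };<br>
+  char MemorySSAUserExample::ID = 0;<br>
+  } // end anonymous namespace<br>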
+<br>
+<br>
+The walker<br>
+----------<br>
+<br>
+A structure that helps ``MemorySSA`` do its job is the ``MemorySSAWalker``, or<br>
+the walker, for short. The goal of the walker is to provide answers to clobber<br>
+queries beyond what's represented directly by ``MemoryAccess`` es. For example,<br>
+given:<br>
+<br>
+.. code-block:: llvm<br>
+<br>
+ define void @foo() {<br>
+ %a = alloca i8<br>
+ %b = alloca i8<br>
+<br>
+ ; 1 = MemoryDef(liveOnEntry)<br>
+ store i8 0, i8* %a<br>
+ ; 2 = MemoryDef(1)<br>
+ store i8 0, i8* %b<br>
+ }<br>
+<br>
+The store to ``%a`` is clearly not a clobber for the store to ``%b``. It would<br>
+be the walker's goal to figure this out, and return ``liveOnEntry`` when queried<br>
+for the clobber of ``MemoryAccess`` ``2``.<br>
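+<br>
+In code, that query might look roughly like this (a sketch under the same<br>
+assumptions as the earlier examples):<br>
+<br>
+.. code-block:: c++<br>
+<br>
+  #include "llvm/Transforms/Utils/MemorySSA.h" // Header path as of this commit.<br>
+<br>
+  // Sketch: ask the default walker for the clobber of the store to %b above.<br>
+  void queryClobber(llvm::MemorySSA &MSSA, llvm::Instruction &StoreToB) {<br>
+    llvm::MemorySSAWalker *Walker = MSSA.getWalker();<br>
+    // For "2 = MemoryDef(1)" above, a walker that consults alias analysis<br>
+    // can skip the unrelated store to %a and return liveOnEntry.<br>
+    llvm::MemoryAccess *Clobber =<br>
+        Walker->getClobberingMemoryAccess(&StoreToB);<br>
+    if (MSSA.isLiveOnEntryDef(Clobber)) {<br>
+      // Nothing in this function clobbers %b's memory before this store.<br>
+    }<br>
+  }<br>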
+<br>
+By default, ``MemorySSA`` provides a walker that can optimize ``MemoryDef`` s<br>
+and ``MemoryUse`` s by consulting alias analysis. Walkers were built to be<br>
+flexible, though, so it's entirely reasonable (and expected) to create more<br>
+specialized walkers (e.g. one that queries ``GlobalsAA``).<br><br></blockquote><div><br></div></div></div><div>How is querying GlobalsAA different from "consulting alias analysis"?</div><div>Did you mean that the default walker queries some pre-defined AA, but you can create a walker that queries a different one? </div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+<br>
+Locating clobbers yourself<br>
+^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
+<br>
+If you choose to make your own walker, you can find the clobber for a<br>
+``MemoryAccess`` by walking every ``MemoryDef`` that dominates said<br>
+``MemoryAccess``. The structure of ``MemoryDef`` s makes this relatively simple;<br>
+they ultimately form a linked list of every clobber that dominates the<br>
+``MemoryAccess`` that you're trying to optimize. In other words, the<br>
+``definingAccess`` of a ``MemoryDef`` is always the nearest dominating<br>
+``MemoryDef`` or ``MemoryPhi`` of said ``MemoryDef``.<br>
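+<br>
+A minimal sketch of that walk is below; it deliberately stops at the first<br>
+``MemoryPhi``, whereas a real walker would also consult alias analysis at each<br>
+step and recurse into phi operands:<br>
+<br>
+.. code-block:: c++<br>
+<br>
+  #include "llvm/Transforms/Utils/MemorySSA.h" // Header path as of this commit.<br>
+<br>
+  // Sketch: walk the defining-access chain of a MemoryDef upwards until we<br>
+  // reach liveOnEntry or a MemoryPhi.<br>
+  void walkDefiningAccesses(llvm::MemorySSA &MSSA, llvm::MemoryDef *Start) {<br>
+    llvm::MemoryAccess *Current = Start->getDefiningAccess();<br>
+    while (!MSSA.isLiveOnEntryDef(Current)) {<br>
+      auto *Def = llvm::dyn_cast<llvm::MemoryDef>(Current);<br>
+      if (!Def)<br>
+        break; // A MemoryPhi; a real walker would look through its operands.<br>
+      // ... here, ask alias analysis whether Def clobbers the location we<br>
+      // care about, and stop if it does ...<br>
+      Current = Def->getDefiningAccess();<br>
+    }<br>
+  }<br>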
+<br>
+<br>
+Use optimization<br>
+----------------<br>
+<br>
+``MemorySSA`` will optimize some ``MemoryAccess`` es at build-time.<br>
+Specifically, we optimize the operand of every ``MemoryUse`` to point to the<br>
+actual clobber of said ``MemoryUse``. This can be seen in the above example; the<br>
+second ``MemoryUse`` in ``if.end`` has an operand of ``1``, which is a<br>
+``MemoryDef`` from the entry block. This is done to make walking,<br>
+value numbering, etc, faster and easier.<br>
+It is not possible to optimize ``MemoryDef`` in the same way, as we<br>
+restrict ``MemorySSA`` to one heap variable and, thus, one Phi node<br>
+per block.<br>
+<br>
+<br>
+Invalidation and updating<br>
+-------------------------<br>
+<br>
+Because ``MemorySSA`` keeps track of LLVM IR, it needs to be updated whenever<br>
+the IR is updated. "Update", in this case, includes the addition, deletion, and<br>
+motion of IR instructions. The update API is being developed on an as-needed basis.<br>
+<br>
+<br>
+Phi placement<br>
+^^^^^^^^^^^^^<br>
+<br>
+``MemorySSA`` only places ``MemoryPhi`` s where they're actually<br>
+needed. That is, it is a pruned SSA form, like LLVM's SSA form. For<br>
+example, consider:<br>
+<br>
+.. code-block:: llvm<br>
+<br>
+ define void @foo() {<br>
+ entry:<br>
+ %p1 = alloca i8<br>
+ %p2 = alloca i8<br>
+ %p3 = alloca i8<br>
+ ; 1 = MemoryDef(liveOnEntry)<br>
+ store i8 0, i8* %p3<br>
+ br label %while.cond<br>
+<br>
+ while.cond:<br>
+ ; 3 = MemoryPhi({%0,1},{if.end,2})<br>
+ br i1 undef, label %if.then, label %if.else<br>
+<br>
+ if.then:<br>
+ br label %if.end<br>
+<br>
+ if.else:<br>
+ br label %if.end<br>
+<br>
+ if.end:<br>
+ ; MemoryUse(1)<br>
+ %1 = load i8, i8* %p1<br>
+ ; 2 = MemoryDef(3)<br>
+ store i8 2, i8* %p2<br>
+ ; MemoryUse(1)<br>
+ %2 = load i8, i8* %p3<br>
+ br label %while.cond<br>
+ }<br>
+<br>
+Because we removed the stores from ``if.then`` and ``if.else``, a ``MemoryPhi``<br>
+for ``if.end`` would be pointless, so we don't place one. So, if you need to<br>
+place a ``MemoryDef`` in ``if.then`` or ``if.else``, you'll need to also create<br>
+a ``MemoryPhi`` for ``if.end``.<br>
+<br>
+If it turns out that this is a large burden, we can just place ``MemoryPhi`` s<br>
+everywhere. Because we have Walkers that are capable of optimizing above said<br>
+phis, doing so shouldn't prohibit optimizations.<br>
+<br>
+<br>
+Non-Goals<br>
+---------<br>
+<br>
+``MemorySSA`` is meant to reason about the relation between memory<br>
+operations, and enable quicker querying.<br>
+It isn't meant to be the single source of truth for all potential memory-related<br>
+optimizations. Specifically, care must be taken when trying to use ``MemorySSA``<br>
+to reason about atomic or volatile operations, as in:<br>
+<br>
+.. code-block:: llvm<br>
+<br>
+ define i8 @foo(i8* %a) {<br>
+ entry:<br>
+ br i1 undef, label %if.then, label %if.end<br>
+<br>
+ if.then:<br>
+ ; 1 = MemoryDef(liveOnEntry)<br>
+ %0 = load volatile i8, i8* %a<br>
+ br label %if.end<br>
+<br>
+ if.end:<br>
+ %av = phi i8 [0, %entry], [%0, %if.then]<br>
+ ret i8 %av<br>
+ }<br>
+<br>
+Going solely by ``MemorySSA``'s analysis, hoisting the ``load`` to ``entry`` may<br>
+seem legal. Because it's a volatile load, though, it's not.<br>
+<br>
+<br>
+Design tradeoffs<br>
+----------------<br>
+<br>
+Precision<br>
+^^^^^^^^^<br>
+``MemorySSA`` in LLVM deliberately trades off precision for speed.<br>
+Let us think about memory variables as if they were disjoint partitions of the<br>
+heap (that is, if you have one variable, as above, it represents the entire<br>
+heap, and if you have multiple variables, each one represents some<br>
+disjoint portion of the heap).<br>
+<br>
+First, because alias analysis results conflict with each other, and<br>
+each result may be what an analysis wants (IE<br>
+TBAA may say no-alias, and something else may say must-alias), it is<br>
+not possible to partition the heap the way every optimization wants.<br></blockquote><div><br></div></div></div><div>I think the start and the end of this sentence are orthogonal.</div><div>It's true that different optimizations may want different levels of precision, but I don't think must-alias/no-alias conflicts are a good motivation. Ideally, two correct alias analyses should not return conflicting results. The idea behind the old AA stack is that we have a lattice where "May < No" and "May < Must", and going through the stack only moves you upwards. TBAA is a special case, because we sort-of "ignore" TBAA when not in strict-aliasing mode, but I think, conceptually, the right way to look at this is that w/o strict-aliasing, the TBAA no-alias is "wrong". </div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+Second, some alias analysis results are not transitive (IE A noalias B,<br>
+and B noalias C, does not mean A noalias C), so it is not possible to<br>
+come up with a precise partitioning in all cases without variables to<br>
+represent every pair of possible aliases. Thus, partitioning<br>
+precisely may require introducing at least N^2 new virtual variables,<br>
+phi nodes, etc.<br>
+<br>
+Each of these variables may be clobbered at multiple def sites.<br>
+<br>
+To give an example, if you were to split up struct fields into<br>
+individual variables, all aliasing operations that may-def multiple struct<br>
+fields will may-def more than one of them. This is pretty common (calls,<br>
+copies, field stores, etc.).<br>
+<br>
+Experience with SSA forms for memory in other compilers has shown that<br>
+it is simply not possible to do this precisely, and in fact, doing it<br>
+precisely is not worth it, because now all the optimizations have to<br>
+walk tons and tons of virtual variables and phi nodes.<br>
+<br>
+So we partition. At the point at which you partition, again,<br>
+experience has shown us there is no point in partitioning to more than<br>
+one variable. It simply generates more IR, and optimizations still<br>
+have to query something to disambiguate further anyway.<br>
+<br>
+As a result, LLVM partitions to one variable.<br>
+<br>
+Use Optimization<br>
+^^^^^^^^^^^^^^^^<br>
+<br>
+Unlike other partitioned forms, LLVM's ``MemorySSA`` does make one<br>
+useful guarantee - all loads are optimized to point at the thing that<br>
+actually clobbers them. This gives some nice properties. For example,<br>
+for a given store, you can find all loads actually clobbered by that<br>
+store by walking the immediate uses of the store.<br>
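+<br>
+A hedged sketch of that use-list walk (again with made-up helper names; see<br>
+``MemorySSA.h`` for the exact signatures):<br>
+<br>
+.. code-block:: c++<br>
+<br>
+  #include "llvm/ADT/SmallVector.h"<br>
+  #include "llvm/Transforms/Utils/MemorySSA.h" // Header path as of this commit.<br>
+<br>
+  // Sketch: collect the loads whose optimized MemoryUse points directly at<br>
+  // the MemoryDef of StoreI, i.e. the loads actually clobbered by it.<br>
+  void collectClobberedLoads(llvm::MemorySSA &MSSA, llvm::Instruction &StoreI,<br>
+                             llvm::SmallVectorImpl<llvm::Instruction *> &Loads) {<br>
+    llvm::MemoryAccess *DefAccess = MSSA.getMemoryAccess(&StoreI);<br>
+    if (!DefAccess)<br>
+      return;<br>
+    // MemoryAccesses are Values, so the usual use-list iteration works; skip<br>
+    // users that are MemoryDefs or MemoryPhis rather than MemoryUses.<br>
+    for (llvm::User *U : DefAccess->users())<br>
+      if (auto *MU = llvm::dyn_cast<llvm::MemoryUse>(U))<br>
+        Loads.push_back(MU->getMemoryInst());<br>
+  }<br>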
<br>
Modified: llvm/trunk/docs/index.rst<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/index.rst?rev=278875&r1=278874&r2=278875&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-pr<wbr>oject/llvm/trunk/docs/index.rs<wbr>t?rev=278875&r1=278874&r2=2788<wbr>75&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/docs/index.rst (original)<br>
+++ llvm/trunk/docs/index.rst Tue Aug 16 19:17:29 2016<br>
@@ -235,6 +235,7 @@ For API clients and LLVM developers.<br>
:hidden:<br>
<br>
AliasAnalysis<br>
+ MemorySSA<br>
BitCodeFormat<br>
BlockFrequencyTerminology<br>
BranchWeightMetadata<br>
@@ -291,6 +292,9 @@ For API clients and LLVM developers.<br>
Information on how to write a new alias analysis implementation or how to<br>
use existing analyses.<br>
<br>
+:doc:`MemorySSA`<br>
+ Information about the MemorySSA utility in LLVM, as well as how to use it.<br>
+<br>
:doc:`GarbageCollection`<br>
The interfaces source-language compilers should use for compiling GC'd<br>
programs.<br>
<br>
<br>
______________________________<wbr>_________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/<wbr>mailman/listinfo/llvm-commits</a><br>
</blockquote></div></div></div><br></div></div>
</blockquote></div><br></div></div></div></div>
</blockquote></div><br></div></div></div>