Thanks!

On Tue, Aug 16, 2016 at 6:59 PM, George Burgess IV <george.burgess.iv@gmail.com> wrote:

That's what I get for trying to update the examples without running them through `opt` again. :)

Thanks for the feedback! Fixed in r278885.

On Tue, Aug 16, 2016 at 6:47 PM, Sean Silva <chisophugis@gmail.com> wrote:

This is great! Thanks for doing this. Some small comments inline.

On Tue, Aug 16, 2016 at 5:17 PM, George Burgess IV via llvm-commits <llvm-commits@lists.llvm.org> wrote:

Author: gbiv
Date: Tue Aug 16 19:17:29 2016
New Revision: 278875

URL: http://llvm.org/viewvc/llvm-project?rev=278875&view=rev
Log:
[Docs] Add initial MemorySSA documentation.

Patch partially by Danny.

Differential Revision: https://reviews.llvm.org/D23535

Added:
llvm/trunk/docs/MemorySSA.rst
Modified:
llvm/trunk/docs/AliasAnalysis.rst
llvm/trunk/docs/index.rst

Modified: llvm/trunk/docs/AliasAnalysis.rst
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/AliasAnalysis.rst?rev=278875&r1=278874&r2=278875&view=diff
==============================================================================
--- llvm/trunk/docs/AliasAnalysis.rst (original)
+++ llvm/trunk/docs/AliasAnalysis.rst Tue Aug 16 19:17:29 2016
@@ -702,6 +702,12 @@ algorithm will have a lower number of ma
Memory Dependence Analysis
==========================

+.. note::
+
+ We are currently in the process of migrating things from
+ ``MemoryDependenceAnalysis`` to :doc:`MemorySSA`. Please try to use
+ that instead.
+
If you're just looking to be a client of alias analysis information, consider
using the Memory Dependence Analysis interface instead. MemDep is a lazy,
caching layer on top of alias analysis that is able to answer the question of

Added: llvm/trunk/docs/MemorySSA.rst
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/MemorySSA.rst?rev=278875&view=auto
==============================================================================
--- llvm/trunk/docs/MemorySSA.rst (added)
+++ llvm/trunk/docs/MemorySSA.rst Tue Aug 16 19:17:29 2016
@@ -0,0 +1,358 @@
+=========
+MemorySSA
+=========
+
+.. contents::
+ :local:
+
+Introduction
+============
+
+``MemorySSA`` is an analysis that allows us to cheaply reason about the
+interactions between various memory operations. Its goal is to replace
+``MemoryDependenceAnalysis`` for most (if not all) use-cases. This is because,
+unless you're very careful, use of ``MemoryDependenceAnalysis`` can easily
+result in quadratic-time algorithms in LLVM. Additionally, ``MemorySSA`` doesn't
+have as many arbitrary limits as ``MemoryDependenceAnalysis``, so you should get
+better results, too.
+
+At a high level, one of the goals of ``MemorySSA`` is to provide an SSA-based
+form for memory, complete with def-use and use-def chains, which
+enables users to quickly find may-defs and may-uses of memory operations.
+It can also be thought of as a way to cheaply give versions to the complete
+state of heap memory, and associate memory operations with those versions.
+
+This document goes over how ``MemorySSA`` is structured, and some basic
+intuition on how ``MemorySSA`` works.
+
+A paper on MemorySSA (with notes about how it's implemented in GCC) `can be
+found here <http://www.airs.com/dnovillo/Papers/mem-ssa.pdf>`_. Though, it's
+relatively out-of-date; the paper references multiple heap partitions, but GCC
+eventually swapped to just using one, like we now have in LLVM. Like
+GCC's, LLVM's MemorySSA is intraprocedural.
+
+
+MemorySSA Structure
+===================
+
+MemorySSA is a virtual IR. After it's built, ``MemorySSA`` will contain a
+structure that maps ``Instruction`` s to ``MemoryAccess`` es, which are
+``MemorySSA``'s parallel to LLVM ``Instruction`` s.
+
+Each ``MemoryAccess`` can be one of three types:
+
+- ``MemoryPhi``
+- ``MemoryUse``
+- ``MemoryDef``
+
+``MemoryPhi`` s are ``PhiNode`` s, but for memory operations. If at any
+point we have two (or more) ``MemoryDef`` s that could flow into a
+``BasicBlock``, the block's top ``MemoryAccess`` will be a
+``MemoryPhi``. As in LLVM IR, ``MemoryPhi`` s don't correspond to any
+concrete operation. As such, you can't look up a ``MemoryPhi`` with an
+``Instruction`` (though we do allow you to do so with a
+``BasicBlock``).
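
As a rough illustration (not part of this patch), here's how a client might tell the three kinds apart in C++, assuming the lookup API described in this document; the helper name is a placeholder, and the header lived under Transforms/Utils at the time of this revision:

  #include "llvm/Transforms/Utils/MemorySSA.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void describeMemoryBehavior(MemorySSA &MSSA, Instruction &I) {
    // Instructions that don't touch memory have no MemoryAccess at all.
    if (MemoryAccess *MA = MSSA.getMemoryAccess(&I)) {
      if (isa<MemoryUse>(MA))
        errs() << I << " reads memory but never clobbers it\n";
      else if (isa<MemoryDef>(MA))
        errs() << I << " may write or otherwise clobber memory\n";
    }
    // MemoryPhis have no corresponding Instruction; look them up by block.
    if (auto *Phi = dyn_cast_or_null<MemoryPhi>(
            MSSA.getMemoryAccess(I.getParent())))
      errs() << "memory states merge at the top of "
             << I.getParent()->getName() << "\n";
  }
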
+
+Note also that in SSA, Phi nodes merge must-reach definitions (that
+is, definite new versions of variables). In MemorySSA, Phi nodes merge
+may-reach definitions (that is, until disambiguated, the versions that
+reach a phi node may or may not clobber a given variable).
+
+``MemoryUse`` s are operations which use but don't modify memory. An example of
+a ``MemoryUse`` is a ``load``, or a ``readonly`` function call.
+
+``MemoryDef`` s are operations which may either modify memory, or which
+otherwise clobber memory in unquantifiable ways. Examples of ``MemoryDef`` s
+include ``store`` s, function calls, ``load`` s with ``acquire`` (or higher)
+ordering, volatile operations, memory fences, etc.
+
+Every function that exists has a special ``MemoryDef`` called ``liveOnEntry``.
+It dominates every ``MemoryAccess`` in the function that ``MemorySSA`` is being
+run on, and implies that we've hit the top of the function. It's the only
+``MemoryDef`` that maps to no ``Instruction`` in LLVM IR. Use of
+``liveOnEntry`` implies that the memory being used is either undefined or
+defined before the function begins.
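
A tiny sketch of how that shows up in code (illustrative only, reusing the includes from the sketch above; the helper name is made up):

  // True when nothing inside the function clobbers the memory this load
  // reads, i.e. its defining access is the special liveOnEntry MemoryDef.
  bool dependsOnlyOnEntryState(MemorySSA &MSSA, LoadInst &LI) {
    // Volatile/ordered loads are MemoryDefs, so go through MemoryUseOrDef.
    auto *MA = cast<MemoryUseOrDef>(MSSA.getMemoryAccess(&LI));
    return MSSA.isLiveOnEntryDef(MA->getDefiningAccess());
  }
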
+
+An example of all of this overlaid on LLVM IR (obtained by running ``opt
+-passes='print<memoryssa>' -disable-output`` on an ``.ll`` file) is below. When
+viewing this example, it may be helpful to view it in terms of clobbers. The
+operands of a given ``MemoryAccess`` are all (potential) clobbers of said
+MemoryAccess, and the value produced by a ``MemoryAccess`` can act as a clobber
+for other ``MemoryAccess`` es. Another useful way of looking at it is in
+terms of heap versions. In that view, operands of a given
+``MemoryAccess`` are the version of the heap before the operation, and
+if the access produces a value, the value is the new version of the heap
+after the operation.
+
+.. code-block:: llvm
+
+ define void @foo() {
+ entry:
+ %p1 = alloca i8
+ %p2 = alloca i8
+ %p3 = alloca i8
+ ; 1 = MemoryDef(liveOnEntry)
+ store i8 0, i8* %p3
+ br label %while.cond
+
+ while.cond:
+ ; 6 = MemoryPhi({%0,1},{if.end,4})

I think `%0` should be `entry` here.

+ br i1 undef, label %if.then, label %if.else
+
+ if.then:
+ ; 2 = MemoryDef(6)
+ store i8 0, i8* %p1
+ br label %if.end
+
+ if.else:
+ ; 3 = MemoryDef(6)
+ store i8 1, i8* %p2
+ br label %if.end
+
+ if.end:
+ ; 5 = MemoryPhi({if.then,2},{if.then,3})

Should this be `{if.else,3}`?

+ ; MemoryUse(5)
+ %1 = load i8, i8* %p1
+ ; 4 = MemoryDef(5)
+ store i8 2, i8* %p2
+ ; MemoryUse(1)
+ %2 = load i8, i8* %p3
+ br label %while.cond
+ }
+
+The ``MemorySSA`` IR is located comments that precede the instructions they map

This sounds like it should say: The ``MemorySSA`` IR comments are located such that they precede ...

+to (if such an instruction exists). For example, ``1 = MemoryDef(liveOnEntry)``
+is a ``MemoryAccess`` (specifically, a ``MemoryDef``), and it describes the LLVM
+instruction ``store i8 0, i8* %p3``. Other places in ``MemorySSA`` refer to this
+particular ``MemoryDef`` as ``1`` (much like how one can refer to ``load i8, i8*
+%p1`` in LLVM with ``%1``). Again, ``MemoryPhi`` s don't correspond to any LLVM
+Instruction, so the line directly below a ``MemoryPhi`` isn't special.
+
+Going from the top down:
+
+- ``6 = MemoryPhi({%0,1},{if.end,4})`` notes that, when entering ``while.cond``,

`%0` -> `entry` here also I think.

+ the reaching definition for it is either ``1`` or ``4``. This ``MemoryPhi`` is
+ referred to in the textual IR by the number ``6``.
+- ``2 = MemoryDef(6)`` notes that ``store i8 0, i8* %p1`` is a definition,
+ and its reaching definition before it is ``6``, or the ``MemoryPhi`` after
+ ``while.cond``.
+- ``3 = MemoryDef(6)`` notes that ``store i8 1, i8* %p2`` is a definition; its
+ reaching definition is also ``6``.
+- ``5 = MemoryPhi({if.then,2},{if.then,3})`` notes that the clobber before

`{if.else,3}` I think.

+ this block could either be ``2`` or ``3``.
+- ``MemoryUse(5)`` notes that ``load i8, i8* %p1`` is a use of memory, and that
+ it's clobbered by ``5``.
+- ``4 = MemoryDef(5)`` notes that ``store i8 2, i8* %p2`` is a definition; its
+ reaching definition is ``5``.
+- ``MemoryUse(1)`` notes that ``load i8, i8* %p3`` is just a user of memory,
+ and the last thing that could clobber this use is above ``while.cond`` (e.g.
+ the store to ``%p3``). In heap versioning parlance, it really
+ only depends on the heap version 1, and is unaffected by the new
+ heap versions generated since then.
+
+As an aside, ``MemoryAccess`` is a ``Value`` mostly for convenience; it's not
+meant to interact with LLVM IR.
+
+Design of MemorySSA
+===================
+
+``MemorySSA`` is an analysis that can be built for any arbitrary function. When
+it's built, it does a pass over the function's IR in order to build up its
+mapping of ``MemoryAccess`` es. You can then query ``MemorySSA`` for things like
+the dominance relation between ``MemoryAccess`` es, and get the ``MemoryAccess``
+for any given ``Instruction`` .
+
+When ``MemorySSA`` is done building, it also hands you a ``MemorySSAWalker``
+that you can use (see below).
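
To make that shape concrete, here's an illustrative sketch (placeholder names, reusing the includes from the earlier sketch). In a legacy pass the ``MemorySSA`` reference would typically come from ``MemorySSAWrapperPass``, and this assumes a ``dominates(MemoryAccess *, MemoryAccess *)`` style query:

  // Typical queries a client might make; assumes both instructions touch
  // memory, so both lookups return a non-null MemoryAccess.
  void exampleQueries(MemorySSA &MSSA, Instruction &StoreA, Instruction &StoreB) {
    MemoryAccess *A = MSSA.getMemoryAccess(&StoreA);
    MemoryAccess *B = MSSA.getMemoryAccess(&StoreB);

    // Dominance between accesses (liveOnEntry dominates everything).
    bool ADominatesB = MSSA.dominates(A, B);
    (void)ADominatesB;

    // The walker handed back by MemorySSA answers clobber queries (see below).
    MemorySSAWalker *Walker = MSSA.getWalker();
    MemoryAccess *ClobberOfB = Walker->getClobberingMemoryAccess(B);
    (void)ClobberOfB;
  }
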
+
+
+The walker
+----------
+
+A structure that helps ``MemorySSA`` do its job is the ``MemorySSAWalker``, or
+the walker, for short. The goal of the walker is to provide answers to clobber
+queries beyond what's represented directly by ``MemoryAccess`` es. For example,
+given:
+
+.. code-block:: llvm
+
+ define void @foo() {
+ %a = alloca i8
+ %b = alloca i8
+
+ ; 1 = MemoryDef(liveOnEntry)
+ store i8 0, i8* %a
+ ; 2 = MemoryDef(1)
+ store i8 0, i8* %b
+ }
+
+The store to ``%a`` is clearly not a clobber for the store to ``%b``. It would
+be the walker's goal to figure this out, and return ``liveOnEntry`` when queried
+for the clobber of ``MemoryAccess`` ``2``.
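
In code, that query might look roughly like the sketch below (not from the patch; ``StoreB`` stands for the ``store i8 0, i8* %b`` above):

  void queryWalker(MemorySSA &MSSA, Instruction &StoreB) {
    MemorySSAWalker *Walker = MSSA.getWalker();
    // `2 = MemoryDef(1)` only records `1` as its defining access; the walker
    // consults alias analysis, sees that the store to %a can't clobber %b,
    // and so skips all the way up to liveOnEntry.
    MemoryAccess *Clobber = Walker->getClobberingMemoryAccess(&StoreB);
    bool NoClobberInFunction = MSSA.isLiveOnEntryDef(Clobber);
    (void)NoClobberInFunction;
  }
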
+
+By default, ``MemorySSA`` provides a walker that can optimize ``MemoryDef`` s
+and ``MemoryUse`` s by consulting alias analysis. Walkers were built to be
+flexible, though, so it's entirely reasonable (and expected) to create more
+specialized walkers (e.g. one that queries ``GlobalsAA``).
+
+
+Locating clobbers yourself
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you choose to make your own walker, you can find the clobber for a
+``MemoryAccess`` by walking every ``MemoryDef`` that dominates said
+``MemoryAccess``. The structure of ``MemoryDef`` s makes this relatively simple;
+they ultimately form a linked list of every clobber that dominates the
+``MemoryAccess`` that you're trying to optimize. In other words, the
+``definingAccess`` of a ``MemoryDef`` is always the nearest dominating
+``MemoryDef`` or ``MemoryPhi`` of said ``MemoryDef``.
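
An illustration of that upward walk (a sketch of the idea only, not the in-tree walker; ``mayClobber`` stands in for whatever disambiguation, e.g. an alias analysis query, your walker performs):

  MemoryAccess *findClobber(MemorySSA &MSSA, MemoryUseOrDef *Start,
                            function_ref<bool(MemoryDef *)> mayClobber) {
    MemoryAccess *Current = Start->getDefiningAccess();
    while (!MSSA.isLiveOnEntryDef(Current)) {
      // A MemoryPhi means several versions of memory merge here; a simple
      // single-chain walk has to stop (or recurse into each incoming value).
      auto *Def = dyn_cast<MemoryDef>(Current);
      if (!Def || mayClobber(Def))
        break;
      Current = Def->getDefiningAccess();
    }
    return Current;
  }
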
+
+
+Use optimization
+----------------
+
+``MemorySSA`` will optimize some ``MemoryAccess`` es at build-time.
+Specifically, we optimize the operand of every ``MemoryUse`` s to point to the

MemoryUse should be non-plural here.

+actual clobber of said ``MemoryUse``. This can be seen in the above example; the
+second ``MemoryUse`` in ``if.end`` has an operand of ``1``, which is a
+``MemoryDef`` from the entry block. This is done to make walking,
+value numbering, etc., faster and easier.
+It is not possible to optimize ``MemoryDef`` in the same way, as we
+restrict ``MemorySSA`` to one heap variable and, thus, one Phi node
+per block.
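
One practical consequence, sketched below with a made-up helper (reusing the earlier includes): for a ``MemoryUse``, the defining access is already the real clobber, so no walker query is needed.

  MemoryAccess *clobberOfLoad(MemorySSA &MSSA, LoadInst &LI) {
    // Optimized at build time: the use's operand is its actual clobber.
    if (auto *Use = dyn_cast_or_null<MemoryUse>(MSSA.getMemoryAccess(&LI)))
      return Use->getDefiningAccess();
    // Volatile/ordered loads are MemoryDefs; those still need the walker.
    return nullptr;
  }
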
+
+
+Invalidation and updating
+-------------------------
+
+Because ``MemorySSA`` keeps track of LLVM IR, it needs to be updated whenever
+the IR is updated. "Update", in this case, includes the addition, deletion, and
+motion of IR instructions. The update API is being made on an as-needed basis.

You may want to mention any examples of in-tree optimizations that currently do updating.

+
+
+Phi placement
+^^^^^^^^^^^^^
+
+``MemorySSA`` only places ``MemoryPhi`` s where they're actually
+needed. That is, it is a pruned SSA form, like LLVM's SSA form. For
+example, consider:
+
+.. code-block:: llvm
+
+ define void @foo() {
+ entry:
+ %p1 = alloca i8
+ %p2 = alloca i8
+ %p3 = alloca i8
+ ; 1 = MemoryDef(liveOnEntry)
+ store i8 0, i8* %p3
+ br label %while.cond
+
+ while.cond:
+ ; 3 = MemoryPhi({%0,1},{if.end,2})
+ br i1 undef, label %if.then, label %if.else
+
+ if.then:
+ br label %if.end
+
+ if.else:
+ br label %if.end
+
+ if.end:
+ ; MemoryUse(1)
+ %1 = load i8, i8* %p1
+ ; 2 = MemoryDef(3)
+ store i8 2, i8* %p2
+ ; MemoryUse(1)
+ %2 = load i8, i8* %p3
+ br label %while.cond
+ }
+
+Because we removed the stores from ``if.then`` and ``if.else``, a ``MemoryPhi``
+for ``if.end`` would be pointless, so we don't place one. So, if you need to
+place a ``MemoryDef`` in ``if.then`` or ``if.else``, you'll need to also create
+a ``MemoryPhi`` for ``if.end``.
+
+If it turns out that this is a large burden, we can just place ``MemoryPhi`` s
+everywhere. Because we have Walkers that are capable of optimizing above said
+phis, doing so shouldn't prohibit optimizations.
+
+
+Non-Goals
+---------
+
+``MemorySSA`` is meant to reason about the relation between memory
+operations, and enable quicker querying.
+It isn't meant to be the single source of truth for all potential memory-related
+optimizations. Specifically, care must be taken when trying to use ``MemorySSA``
+to reason about atomic or volatile operations, as in:
+
+.. code-block:: llvm
+
+ define i8 @foo(i8* %a) {
+ entry:
+ br i1 undef, label %if.then, label %if.end
+
+ if.then:
+ ; 1 = MemoryDef(liveOnEntry)
+ %0 = load volatile i8, i8* %a
+ br label %if.end
+
+ if.end:
+ %av = phi i8 [0, %entry], [%0, %if.then]
+ ret i8 %av
+ }
+
+Going solely by ``MemorySSA``'s analysis, hoisting the ``load`` to ``entry`` may
+seem legal. Because it's a volatile load, though, it's not.
+
+
+Design tradeoffs
+----------------
+
+Precision
+^^^^^^^^^
+``MemorySSA`` in LLVM deliberately trades off precision for speed.
+Let us think about memory variables as if they were disjoint partitions of the
+heap (that is, if you have one variable, as above, it represents the entire
+heap, and if you have multiple variables, each one represents some
+disjoint portion of the heap).
+
+First, because alias analysis results conflict with each other, and
+each result may be what an analysis wants (i.e.,
+TBAA may say no-alias, and something else may say must-alias), it is
+not possible to partition the heap the way every optimization wants.
+Second, some alias analysis results are not transitive (i.e., A noalias B,
+and B noalias C, does not mean A noalias C), so it is not possible to
+come up with a precise partitioning in all cases without variables to
+represent every pair of possible aliases. Thus, partitioning
+precisely may require introducing at least N^2 new virtual variables,
+phi nodes, etc.
+
+Each of these variables may be clobbered at multiple def sites.
+
+To give an example, if you were to split up struct fields into
+individual variables, all aliasing operations that may-def multiple struct
+fields will may-def more than one of them. This is pretty common (calls,
+copies, field stores, etc.).
+
+Experience with SSA forms for memory in other compilers has shown that
+it is simply not possible to do this precisely, and in fact, doing it
+precisely is not worth it, because now all the optimizations have to
+walk tons and tons of virtual variables and phi nodes.
+
+So we partition. At the point at which you partition, again,
+experience has shown us there is no point in partitioning to more than
+one variable. It simply generates more IR, and optimizations still
+have to query something to disambiguate further anyway.
+
+As a result, LLVM partitions to one variable.
+
+Use Optimization
+^^^^^^^^^^^^^^^^
+
+Unlike other partitioned forms, LLVM's ``MemorySSA`` does make one
+useful guarantee - all loads are optimized to point at the thing that
+actually clobbers them. This gives some nice properties. For example,
+for a given store, you can find all loads actually clobbered by that
+store by walking the immediate uses of the store.
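
A sketch of that walk (illustrative only, with placeholder names; it relies on the use-optimization guarantee just described, and reuses the includes from the first sketch plus llvm/ADT/SmallVector.h):

  void collectClobberedLoads(MemorySSA &MSSA, Instruction &Store,
                             SmallVectorImpl<Instruction *> &Loads) {
    auto *Def = cast<MemoryDef>(MSSA.getMemoryAccess(&Store)); // Store writes memory
    // MemoryAccess is a Value, so the MemoryUses whose actual clobber is this
    // store show up directly among its immediate users.
    for (User *U : Def->users())
      if (auto *MU = dyn_cast<MemoryUse>(U))
        Loads.push_back(MU->getMemoryInst());
  }
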

Modified: llvm/trunk/docs/index.rst
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/index.rst?rev=278875&r1=278874&r2=278875&view=diff
==============================================================================
--- llvm/trunk/docs/index.rst (original)
+++ llvm/trunk/docs/index.rst Tue Aug 16 19:17:29 2016
@@ -235,6 +235,7 @@ For API clients and LLVM developers.
:hidden:

AliasAnalysis
+ MemorySSA
BitCodeFormat
BlockFrequencyTerminology
BranchWeightMetadata
@@ -291,6 +292,9 @@ For API clients and LLVM developers.
Information on how to write a new alias analysis implementation or how to
use existing analyses.

+:doc:`MemorySSA`
+ Information about the MemorySSA utility in LLVM, as well as how to use it.
+
:doc:`GarbageCollection`
The interfaces source-language compilers should use for compiling GC'd
programs.


_______________________________________________
llvm-commits mailing list
llvm-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits