[llvm-dev] [RFC] PT.2 Add IR level interprocedural outliner for code size.
Davide Italiano via llvm-dev
llvm-dev at lists.llvm.org
Wed Sep 27 18:22:40 PDT 2017
On Wed, Sep 27, 2017 at 6:07 PM, Matthias Braun <mbraun at apple.com> wrote:
> On Sep 27, 2017, at 3:23 PM, Davide Italiano via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> On Wed, Sep 27, 2017 at 9:28 AM, Jessica Paquette via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> I think that, given previous discussion on the topic, we might want a split
> like this:
> (1) Search structure
> Suffix tree or suffix array.
> (2) Numbering/mapping/congruence scheme
> Every outliner should implement a function that maps instructions (or
> whatever structure you want to outline, crazy thoughts…) to integers. That
> should be passed to the search structure, which will return a list of
> repeated sequences.
> The MachineOutliner currently has an “InstructionMapper” struct in it. I
> think we can probably build on that struct so that we can choose different
> mapping schemes.
> (3) Cost model/candidate selection scheme
> Every outliner should implement a function that returns the outlining cost
> or benefit of a candidate. In the MachineOutliner, this is a joint effort
> between the pass and the target.
> (4) Outlining scheme
> Every outliner should implement a function that actually performs outlining.
> That is, we should have a function that replaces each instance of a repeated
> sequence of instructions with a call to a newly-created function. In the
> MachineOutliner, the method by which we do this is decided on by the target.
> So, we at the very least might want an interface that implements something like:
> unsigned mapInstructions(…);
> unsigned getOutliningCost(…);
> unsigned findCandidates(…);
> bool outline(…);
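A minimal, self-contained sketch of how the four-function split proposed above could hang together. The names `mapInstructions`, `getOutliningCost`, `findCandidates`, and `outline` come from the mail; everything else (the `Candidate` struct, the toy strategy, the naive search standing in for a suffix tree/array) is hypothetical illustration, not actual LLVM code:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// A repeated sequence we might outline (illustrative type, not LLVM's).
struct Candidate {
  size_t Start; // index of the first mapped instruction
  size_t Len;   // number of instructions in the repeated sequence
};

// The four-function interface sketched in the mail.
struct OutlinerStrategy {
  // (2) Numbering/mapping scheme: instructions -> integers.
  virtual std::vector<unsigned>
  mapInstructions(const std::vector<std::string> &Insts) = 0;
  // (1) Search structure query: repeated sequences in the mapped string.
  virtual std::vector<Candidate>
  findCandidates(const std::vector<unsigned> &Mapped) = 0;
  // (3) Cost model: benefit of outlining a candidate.
  virtual int getOutliningCost(const Candidate &C) = 0;
  // (4) Outlining scheme: perform the rewrite.
  virtual bool outline(const Candidate &C) = 0;
  virtual ~OutlinerStrategy() = default;
};

// Toy strategy: nominal mapping (identical text -> same number) and a
// naive O(n^2) length-2 repeat search standing in for a suffix tree.
struct ToyStrategy : OutlinerStrategy {
  std::vector<unsigned>
  mapInstructions(const std::vector<std::string> &Insts) override {
    std::map<std::string, unsigned> Numbering;
    std::vector<unsigned> Out;
    for (const auto &I : Insts)
      Out.push_back(Numbering.emplace(I, Numbering.size()).first->second);
    return Out;
  }
  std::vector<Candidate>
  findCandidates(const std::vector<unsigned> &M) override {
    std::vector<Candidate> Cands;
    for (size_t i = 0; i < M.size(); ++i)
      for (size_t j = i + 2; j + 1 < M.size(); ++j)
        if (M[i] == M[j] && M[i + 1] == M[j + 1])
          Cands.push_back({i, 2}); // repeated length-2 sequence
    return Cands;
  }
  int getOutliningCost(const Candidate &C) override {
    return static_cast<int>(C.Len) - 1; // toy model: longer is better
  }
  bool outline(const Candidate &) override { return true; } // no-op here
};
```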
> Hi Jessica, I tend to agree we should try to maximize code reuse in
> this context, and I think yours is a good step forward. In particular,
> I'm fairly confident the `findCandidates()` bits should be shared.
> That said, do we really want to encapsulate the logic for finding
> candidates into an interface? It's unclear where it should live, but
> it seems much more akin to the stuff living in Transforms/Utils than a
> proper interface.
> So, IMHO it's much more suitable for a helper/utility.
> As far as `mapInstructions()` is concerned [please correct me if I'm
> wrong], this bit of the interface should be responsible for value
> numbering the instructions.
> To the best of my understanding, your outliner value numbers using a
> nominal approach, i.e. two instructions get the same value number if
> their string representation is the same (or something like that).
> With the notion above,
> xorl %eax, %eax
> movl $0, %eax
> aren't really considered equivalent (although, in practice, they are).
> The way I imagined outlining to work at the IR level (and how I once
> discussed it with Dan) is leveraging the result of a value numbering
> analysis (NewGVN) to assign numbers.
> While the problem NewGVN solves is that of finding all the Herbrand
> equivalences (structural equivalence: same operation, with operands
> that are themselves structurally congruent), it also performs a
> canonicalization step, calling InstSimplify, so in this sense it
> catches more than pure structural equivalence, e.g.:
> %x = add %a, 0
> %y = sub %a, 0
> are not structurally equivalent, but
> instSimplify(%x) -> %a
> instSimplify(%y) -> %a
> so after canonicalization they're equivalent.
> Note: The current IR outliner doesn't really use the approach I just
> described, yet, but it's not hard to imagine extending (or rewriting)
> it to use a real global value numbering analysis to find instructions
> that are equivalent, for some definition of structural equivalence.
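The canonicalize-then-number idea above can be sketched in a few lines of toy code. Here `simplify()` stands in for InstSimplify and works on textual pseudo-IR (the string matching is purely illustrative, not a real LLVM API); `valueNumber()` hands out one number per canonical form, so `add %a, 0` and `sub %a, 0` land in the same congruence class:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for InstSimplify: x + 0 and x - 0 both canonicalize to x.
// Anything we don't recognize is returned unchanged.
std::string simplify(const std::string &Inst) {
  auto Pos = Inst.find(" = ");
  if (Pos == std::string::npos)
    return Inst;
  std::string RHS = Inst.substr(Pos + 3);
  if (RHS.rfind("add ", 0) == 0 || RHS.rfind("sub ", 0) == 0) {
    auto Comma = RHS.find(", ");
    if (Comma != std::string::npos && RHS.substr(Comma + 2) == "0")
      return RHS.substr(4, Comma - 4); // the surviving operand
  }
  return Inst;
}

// Number instructions by canonical form, not by spelling: equivalent
// instructions get the same value number.
unsigned valueNumber(const std::string &Inst,
                     std::map<std::string, unsigned> &Table) {
  std::string Canon = simplify(Inst);
  return Table.emplace(Canon, Table.size()).first->second;
}
```

With this, the two instructions from the example above map to the same number, while an unrelated instruction gets a fresh one.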
> I would also note that we already have InstCombine, GVN and other passes.
> Right now I see little value in teaching the outliner about those
> equivalences if we can just rely on earlier passes to normalize. This case
> also doesn't look like a pass ordering problem, as I don't see the outliner
> enabling further optimizations in other passes.
> - Matthias
I'm in agreement with you, for the outliner working in the mid-level optimizer.
FWIW, if you use the congruence classes formed by a value numbering
analysis, you should get these equivalences for free.
Teaching the outliner how to catch congruences as-it-goes instead of
relying on an analysis is madness, and not really fruitful, IMHO. My
question is more how you can get the same effect when you're working
in the backend, as the representation you're working on doesn't really
contain that much information.
I guess the answer is just that it's really hard, without a global VN,
to understand whether `xorl %eax, %eax` and `andl $0, %eax` are the
same, and you don't have that information that far down the pipeline.
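To make the as-it-goes alternative concrete, here is a toy sketch of what it would look like: a hypothetical target hook (invented for this illustration, not the real MachineOutliner mapper) that folds known zero idioms into one canonical token before numbering. It handles the two spellings above, but every new idiom needs another hand-written special case, which is exactly the scaling problem a real value numbering analysis avoids:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical target hook: recognize instructions that zero %eax.
// Each idiom must be listed by hand; nothing is derived from semantics.
bool isZeroIdiom(const std::string &Inst) {
  return Inst == "xorl %eax, %eax" || Inst == "andl $0, %eax";
}

// Map both spellings to one canonical token before handing out numbers,
// so the outliner's search sees them as the same instruction.
unsigned number(const std::string &Inst,
                std::map<std::string, unsigned> &Table) {
  std::string Key = isZeroIdiom(Inst) ? "<zero %eax>" : Inst;
  return Table.emplace(Key, Table.size()).first->second;
}
```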
I'm not an expert in code generation so I may be completely off on
this one, but I assume that you somehow rely on SelectionDAG (e.g.
DAGCombine) to do the canonicalization similarly to what you would do
with InstCombine in the IR optimizer, and that cases where you emit
equivalent instructions with different encodings are relatively
rare (in the case above, you always emit the XOR idiom).
"There are no solved problems; there are only problems that are more
or less solved" -- Henri Poincare