[llvm-dev] [RFC] Tablegen-erated GlobalISel Combine Rules

Daniel Sanders via llvm-dev llvm-dev at lists.llvm.org
Thu Nov 8 17:42:44 PST 2018


Hi All,

I've been working on the GlobalISel combiner recently and I'd like to share the plan for how Combine Rules will be defined in GlobalISel and solicit feedback on it.

This email ended up rather long so:
	TL;DR: We're planning to define GlobalISel Combine Rules using MIR syntax with a few bits glued on to interface with the algorithm and escape into C++ when we need to. Eventually, ISel rules may follow suit.

Thanks to the following people for their help in playing with this syntax and discussing it with me over the last couple weeks:
Aditya Nandakumar
Adrian Prantl
Ahmed Bougacha
Amara Emerson
Jessica Paquette
Michael Berg
Justin Bogner
Roman Tereshin
Vedant Kumar
Volkan Keles
Their feedback has vastly improved this over the original version and it was through those discussions that the idea of re-using MIR as a declarative definition was born.

Also thanks to the person who raised the issue of debug info at the BoF. I'm afraid I didn't write your name in my notes on the day so hopefully you'll make yourself known so we can give you the credit you're due. The debug info part of the proposal is still a little hand-wavy but it wouldn't have been here at all without you :-)

Scope of this RFC

This RFC is intended to solicit feedback on the syntax, semantics, and overall appearance of the tablegen classes and definitions used to define Combine Rules for GlobalISel, and to find any flaws in the usability of such definitions. I'd like to keep the mechanics of the Combine algorithm fairly abstract for this discussion, as a few things that are syntactically possible will require improvements to the combine algorithm before they can be used, and I'd like to leave my thoughts on the algorithm to a later discussion once I have something more concrete. I'll label these areas as they come up.

Overview

GlobalISel requires some passes that replace groups of MachineInstrs with simpler equivalents just like DAGCombine does for SelectionDAG. DAGCombine as a whole is composed of multiple large monoliths of C++ code with one covering the target independent combines, and one for each target. While this gives a lot of power and freedom to do whatever is needed, such an architecture also makes several things difficult:
- It's difficult to test an individual combine as others can modify the test case before it reaches the relevant rule.
- It's difficult to debug due to the quantity of transformations being applied, the iterative nature of the algorithm, and the lack of scope-reducing techniques (e.g. bisection) beyond the top-level opcode.
- It's difficult for a target to opt-out of or specialize a common combine. In-tree targets must inject triple checks, while out-of-tree targets must also pay the price of the resulting merge conflicts.
- It's difficult to write a target specific combine with higher priority than one common combine but lower priority than another.
- It's difficult to write analysis tools for the combines, such as those to detect unreachable combines, or to prove them correct, etc.
- It's difficult to investigate improvements to the overall algorithm as there's no means to change the code for all rules.
- ...and so on.

For these reasons and more it would be useful to have at least partial generation of the combine rules used by GlobalISel. In addition to this, the bulk of matching a combine is boilerplate. It would be useful to defer the boilerplate code to a generator and focus more on the details that make each combine unique.

To that end (and also to eventually drop the SelectionDAG importer for ISel), I've been looking into how we would declare Combine Rules in tablegen and with the help of the people credited above I think we've ended up somewhere that should be nice to work with and opens up a lot of possibilities.

Individual Combine Rules

A Simple Example

Here's a simple example that eliminates a redundant G_TRUNC.

def : GICombineRule<(defs root:$D, operand:$S),
                    (match [{MIR %1(s32) = G_ZEXT %S(s8)
                                 %D(s16) = G_TRUNC %1(s32) }]),
                    (apply [{MIR %D(s16) = G_ZEXT %S(s8) }])>;

There are three parts to this rule:
- defs declares the interface for the rule. The definitions in the defs section are the glue between the Combine algorithm and the rule, as well as between the match and apply sections. In this case we only define the root of the match and that we have a register variable called $S.
- match declares the conditions that must be true for the rule to fire. In this case, we require that we match a G_TRUNC whose result is of type s16 and whose operand is of type s32, as well as a G_ZEXT whose result is the same as the G_TRUNC's operand and whose operand is of type s8. If all of these hold, then we have the option of applying the rule using the apply section. Some algorithms may take every opportunity they see, but others might weigh up different possible choices more carefully.
- apply declares the result of the rule firing. In this case, we'd build a G_ZEXT whose result is %D and whose operand is %S from the match.

Generally speaking, the Combine rule patterns declare what needs to happen and it's up to tablegen to choose how to achieve it. However, Combines can be rather complicated and a DSL cannot hope to cover everything that might be needed. As such there will be trapdoors into arbitrary C++.

The MIR match block matches by following the use-def edges and does not require that the MachineInstrs are in the exact order specified. It only requires that the instructions are connected to each other as described and that the input MIR is valid (e.g. no uses before defs). It can reach across basic blocks when it's safe to do so. We may need to introduce a means to limit the inter-basic-block reach, as I can foresee situations where it may pull expensive operations into a loop if it's not careful about this. My current thinking is that it should be limited to moving to similar or cheaper blocks. I can potentially see some use in choosing the insertion point to select cheaper blocks; however, I think that's best left to passes like MachineLICM rather than having Combine try to do everything.

Syntactically in tablegen, the '[{MIR ... }]' is just an ordinary code block that happens to start with the string 'MIR'. The 'MIR' string is there for editors to use to switch syntax highlighting modes to MIR instead of C++.

Generalizing the Simple Example

Now let's generalize the match section a bit so that it handles all intermediate scalar types. All we need to do is delete the type from %1 like so:
    def : GICombineRule<(defs root:$D, operand:$S),
                        (match [{MIR %1 = G_ZEXT %S(s8)
                                     %D(s16) = G_TRUNC %1 }]),
                        (apply [{MIR %D(s16) = G_ZEXT %S(s8) }])>;
What we've done here is removed the type check on %1 entirely. This is safe for this particular rule since, by definition, G_ZEXT always produces a larger scalar result type than its scalar input type and G_TRUNC always produces a smaller one. The only way both can hold is if %1 is a scalar of at least s17. We're still not covering every type combination that we ought to, though, since %S and %D are still restricted to s8 and s16 respectively. To fix this, we'll need some custom predicates.

A custom predicate is declared like so:
    def isScalarType : GIMatchPredicate<(ins type:$A), bool, [{
      return ${A}.isScalar();
    }]>;
This declares that the predicate takes a type (LLT) named A and returns a bool. It also declares the code used to test the predicate and where the variable for A must be inserted. We can potentially use return types other than bool, such as structures, but we'll leave that for later. Completing the generalized example using this (and another) predicate, we get:
    def isLargerType : GIMatchPredicate<(ins type:$A, type:$B), bool, [{
      return ${A}.getSizeInBits() > ${B}.getSizeInBits();
    }]>;
    def : GICombineRule<(defs root:$D, operand:$S, operand:$T),
                        (match [{MIR %T = G_ZEXT %S
                                     %D = G_TRUNC %T }],
                               (isScalarType type:$D),
                               (isLargerType type:$D, type:$S),
                               (isLargerType type:$T, type:$S),
                               (isLargerType type:$T, type:$D)),
                        (apply [{MIR %D = G_ZEXT %S }])>;
We've declared a C++ predicate that checks whether a given type is a scalar, and another that checks whether one type has more bits than another. We've then extended the rule's defs with an additional operand which names the intermediate operand so that it can be referenced from C++ predicates. Lastly, we've called the C++ predicates as part of the match. Each argument is of type 'operand' and needs to be provided as type 'type', so tablegen emits code to test for convertibility (MO.isReg() in this case) and to do the conversion (MRI.getType(MO.getReg()) in this case). The end result is that this rule now supports all the type combinations where we extend to a larger type, then truncate to a type that lies between the intermediate and the source in size.
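To make the generated checks concrete, here's a hypothetical, self-contained C++ sketch of the predicate logic the rule above asks tablegen to emit. The LLT here is a minimal stand-in (real generated code would use llvm::LLT obtained via MRI.getType(MO.getReg())), and matchesTypeConstraints is an illustrative name rather than actual generated code:

```cpp
#include <cassert>

// Minimal stand-in for llvm::LLT, just enough to express the two predicates.
struct LLT {
  bool Scalar;
  unsigned Bits;
  bool isScalar() const { return Scalar; }
  unsigned getSizeInBits() const { return Bits; }
};

// Bodies copied from the GIMatchPredicate definitions above.
bool isScalarType(LLT A) { return A.isScalar(); }
bool isLargerType(LLT A, LLT B) {
  return A.getSizeInBits() > B.getSizeInBits();
}

// The rule fires only when every predicate invocation in the match succeeds.
bool matchesTypeConstraints(LLT D, LLT S, LLT T) {
  return isScalarType(D) && isLargerType(D, S) && isLargerType(T, S) &&
         isLargerType(T, D);
}
```

For example, the original s8 -> s32 -> s16 case satisfies all four checks, while an intermediate type no larger than the destination does not.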

As noted earlier, we can simplify this to:
    def : GICombineRule<(defs root:$D, operand:$S),
                        (match [{MIR %0 = G_ZEXT %S
                                     %D = G_TRUNC %0 }],
                               (isScalarType type:$D),
                               (isLargerType type:$D, type:$S)),
                        (apply [{MIR %D = G_ZEXT %S }])>;
since the semantics of G_ZEXT and G_TRUNC already guarantee that $T > $S and $T > $D.

Subtarget specific Rules

Facilities like rule predication based on subtarget features will work just as they do in SelectionDAG:
    def : GICombineRule<...>, Requires<Foo>;

Preserving DILocation and other debug info

This section is still fairly vague and has had less investigation than the others. Here's the current thinking on it though.

Because we're using MIR, it becomes fairly easy to represent debug line information in the rules themselves. Here we expand on the above example to merge the line info for the G_ZEXT and the G_TRUNC into the resulting G_ZEXT using DILocation::getMergedLocation(), by simply specifying both debug-location metadata on the new instruction.

    def : GICombineRule<(defs root:$D, operand:$S),
                        (match [{MIR %0 = G_ZEXT %S, debug-location !1
                                     %D = G_TRUNC %0, debug-location !2 }],
                               (isScalarType type:$D),
                               (isLargerType type:$D, type:$S)),
                        (apply [{MIR %D = G_ZEXT %S, debug-location !1, debug-location !2 }])>;

If we add DBG_VALUE to this, then we get:
    def : GICombineRule<(defs root:$D, operand:$S, diexpression:$DExpr),
                        (match [{MIR %0 = G_ZEXT %S, debug-location !1
                                     %D = G_TRUNC %0, debug-location !2
                                     DBG_VALUE %0, !3, !metadata($DExpr), debug-location !4
                                     DBG_VALUE %D, !5, !6, debug-location !7
                                }],
                               (isScalarType type:$D),
                               (isLargerType type:$D, type:$S)),
                        (apply (createDIExprLLVMFragment diexpression:$DExpr, type:$D):$DNewExpr,
                               [{MIR %D = G_ZEXT %S, debug-location !1, debug-location !2
                                     DBG_VALUE %D, !3, !metadata($DNewExpr), debug-location !4
                                     DBG_VALUE %D, !5, !6, debug-location !7
                                }])>;

The semantics of DBG_VALUE in the match and apply sections are a little different from those of most instructions. They still match based on use-def edges rather than the exact order written, like normal instructions. However, the match is optional so that we don't fail to match due to missing DBG_VALUEs. It can also match multiple DBG_VALUEs if there happen to be multiple variables described by a single vreg. The apply step's DBG_VALUE creates a DBG_VALUE corresponding to each one that was matched. createDIExprLLVMFragment invokes C++ code which creates a new DIExpression by appending a 'DW_OP_LLVM_fragment, 0, sizeof(type)' for the given type to the existing DIExpression.

One interesting benefit of this is that it allows us to verify that all DILocations and DBG_VALUEs are accounted for and handled in some way.

I should mention that I'm not terribly keen on the current syntax for modifying the diexpression above (createDIExprLLVMFragment). I feel we ought to be able to improve on that and make it more inline in the MIR.

Matching immediates and G_CONSTANT/G_FCONSTANT

In GlobalISel, an immediate is in one of three forms. There's int64_t for MachineOperand::isImm() and ConstantInt * for MachineOperand::isCImm() but those forms are uncommon and are only really seen on target instructions. The most common form is a vreg defined by G_CONSTANT or G_FCONSTANT. All three forms are handled by the 'imm' declaration.
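As a rough illustration of how code might treat all three forms uniformly, here's a hypothetical, self-contained C++ sketch. MachineOperand, Kind, and getImmValue here are simplified stand-ins for illustration only; real code would use llvm::MachineOperand, ConstantInt, and a walk of the vreg's G_CONSTANT definition via MachineRegisterInfo:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Which of the three immediate forms the operand takes.
enum class Kind { Imm, CImm, Reg };

// Simplified stand-in for llvm::MachineOperand.
struct MachineOperand {
  Kind K;
  int64_t Imm = 0;        // value when K is Imm or CImm
  int64_t RegDefImm = 0;  // stand-in for "vreg defined by G_CONSTANT <imm>"
  bool HasConstantDef = false;
};

// Return the immediate value regardless of which form it takes, or
// std::nullopt for a vreg that isn't defined by a G_CONSTANT.
std::optional<int64_t> getImmValue(const MachineOperand &MO) {
  switch (MO.K) {
  case Kind::Imm:
  case Kind::CImm:
    return MO.Imm;
  case Kind::Reg:
    if (MO.HasConstantDef)
      return MO.RegDefImm;
    return std::nullopt;
  }
  return std::nullopt;
}

// The kind of check an 'isTwo' predicate would perform.
bool isTwo(const MachineOperand &MO) {
  std::optional<int64_t> V = getImmValue(MO);
  return V && *V == 2;
}
```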

Here we declare a rule that matches X * 2, using a custom predicate to check that the immediate is 2:
    def : GICombineRule<(defs root:$D, reg:$A, imm:$VAL),
                        (match [{MIR %D = G_MUL %A, %VAL }],
                               (isTwo imm:$VAL)),
                        (apply [{MIR %D = MYTGT_DOUBLE %A }])>;

Listing the C++ predicates like that will lead to a lot of repetitive and bug-prone code so it will also be possible to embed the predicates in custom variants of imm.
    def imm_2 : GIPredicatedDefKind<(isTwo imm)>;
    def : GICombineRule<(defs root:$D, reg:$A, imm_2:$VAL),
                        (match [{MIR %D = G_MUL %A, %VAL }]),
                        (apply [{MIR %D = MYTGT_DOUBLE %A }])>;
These are effectively appended to the match section.

Just matching an immediate isn't enough though. We'd also want to be able to use them in the result and modify them. This works the same way as registers. Here's an example of using the immediate in the apply step in a rule that replaces 2 * (A + B) with 2A + 2B:
    def : GICombineRule<
      (defs root:$root, reg:$A, imm:$B, imm_2:$C),
      (match [{MIR
               %1 = G_ADD %A, %B
               %root = G_MUL %1, %C
             }]),
      (apply [{MIR %1 = G_ADD %A, %A
                   %2 = G_ADD %B, %B
                   %root = G_ADD %1, %2 }])>;
and of course, we'd also want to constant fold the 2B which is achieved in this rule by creating a new operand before we generate the new instructions:
    def : GICombineRule<
      (defs root:$root, reg:$A, imm:$B, imm_2:$C),
      (match [{MIR
               %1 = G_ADD %A, %B
               %root = G_MUL %1, %C
             }]),
      (apply (create_imm [{ 2 * ${B}->getZExtValue() }], apint_value:$B):$NB,
             [{MIR %1 = G_ADD %A, %A
                   %root = G_ADD %1, %NB }])>;
In this definition, (create_imm ...):$NB creates a new operand with MachineOperand::CreateImm(...) and names it $NB for use in the pattern. The code block inside the create_imm is used as the argument to CreateImm(), and apint_value:$B converts the operand to an APInt before it is given to the code block, much like the custom predicates did earlier.

Passing arbitrary data from match to apply

As mentioned earlier, the defs block defines the interface between the match and apply steps. This can be used to arrange for arbitrary data to be passed from match to apply. 

In the current AArch64PreLegalizeCombiner we have a rule that matches a G_LOAD and all its uses simultaneously and decides on the best way to rewrite it to minimize the sign/zero/any-extend operations. This rule passes a struct (PreferredTuple) between the current C++ equivalent for the match to the current C++ equivalent to the apply. Converting that into this tablegen syntax, we'd write:
    def extending_load_matchdata : GIDefMatchData<"PreferredTuple">;
    def extending_load_predicate : GIMatchPredicate<
         (ins reg:$A, extending_load_matchdata:$matchinfo), bool, [{
      return Helper.matchCombineExtendingLoads(${A}, ${matchinfo});
    }]>;
    def extending_loads : GICombineRule<
      (defs root:$root, reg:$A, extending_load_matchdata:$matchinfo),
      (match [{MIR %root = G_LOAD %A }],
             (extending_load_predicate root:$A,
                                       extending_load_matchdata:$matchinfo)),
      (apply (exec [{ Helper.applyCombineExtendingLoads(${root}, ${matchinfo}); }],
                   reg:$root, extending_load_matchdata:$matchinfo))>;

The GIDefMatchData declares a new type of data that can be passed from the match to the apply. Tablegen is responsible for arranging appropriate storage during the Combine algorithm. The GIMatchPredicate declares a C++ predicate that fills out the PreferredTuple (passed by reference) whenever it returns true for a successful match. We could have made the predicate return std::pair<bool, PreferredTuple> instead, but that's less efficient (it would be an sret return on many targets) and would require us to define its truthiness (there are no examples of that in this email as I expect it to be rarely needed) in order for it to act as a predicate. Normally, you'd feed this data into a (create_imm ...) or a (create_operand ...) in the apply section. However, in this particular case the data being passed determines the entirety of the replacement, so we escape into arbitrary C++ instead and arrange for the variables to be injected appropriately using the 'exec' operator.

Macros

I simplified the previous example a bit. Rather than only matching a G_LOAD, the current rule in AArch64 can match any of G_LOAD, G_SEXTLOAD, and G_ZEXTLOAD. We need some means to match one of several alternatives, as well as to collect and re-use common subpatterns. I've yet to look into how this would be practically implemented, and this section is a bit vague as a result, but here's the current thinking on how it should look and behave:
    def ANYLOAD : GIMacro<(defs def:$R, use:$S, uint64_t:$IDX),
                          (match (oneof [{MIR %R = G_LOAD %S}],
                                        [{MIR %R = G_SEXTLOAD %S}],
                                        [{MIR %R = G_ZEXTLOAD %S}]):$IDX)>;
    def extending_loads : GICombineRule<
      (defs root:$root, reg:$A, extending_load_matchdata:$matchinfo, ANYLOAD:$ANYLOAD),
      (match [{MIR %root = ANYLOAD %A }],
             (extending_load_predicate root:$A,
                                       extending_load_matchdata:$matchinfo)),
      (apply (exec [{ Helper.applyCombineExtendingLoads(${root}, ${matchinfo}); }],
                   reg:$root, extending_load_matchdata:$matchinfo))>;
Effectively, we're declaring a fake instruction and importing it into this rule, possibly renaming it via the argument name ($ANYLOAD in this case) to provide a more convenient name in the MIR block or to disambiguate multiple instances. Once we've parsed the MIR, we would recursively replace any instance of ANYLOAD with code to match one of the alternatives. The variables in the 'defs' section of the macro would be available as $ANYLOAD_R, $ANYLOAD_S, and $ANYLOAD_IDX (we have to use '_' instead of '.' to fit within tablegen's syntax), with $ANYLOAD_IDX indicating which alternative the (oneof ...) matched. When nested, by including a macro in another macro's defs section and using it, the names to access the sub-macro's variables would grow longer by underscore-separated concatenation, for example '$ANYLOAD_ANYEXT_A'.
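As a rough sketch of the intended (oneof ...) semantics, the following self-contained C++ models trying each alternative in order and recording which arm fired in the $IDX-style index. AltMatcher and matchOneOf are illustrative names, not part of any proposed API; real generated code would test the opcode and operands of actual MachineInstrs:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Stand-ins for generated per-alternative matchers (here: G_LOAD,
// G_SEXTLOAD, G_ZEXTLOAD). Each returns true if its pattern matched.
using AltMatcher = std::function<bool()>;

// Try each alternative in order; return the index of the first one that
// matches (this is what $IDX would report), or -1 if none match.
int64_t matchOneOf(const std::vector<AltMatcher> &Alts) {
  for (size_t I = 0; I < Alts.size(); ++I)
    if (Alts[I]())
      return static_cast<int64_t>(I);
  return -1;
}
```

The rule as a whole would then fail to match whenever matchOneOf reports -1, and the apply step could dispatch on the recorded index.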

'Upside-down' matches (i.e. roots at the top) and similar

This one requires algorithm changes which I'd prefer not to discuss in this RFC. Assuming the underlying algorithm gains support for this, this is how the syntax would look:
def : GICombineRule<
  (defs root:$root, reg:$A),
  (match [{MIR %1 = G_LOAD %root
               %A = G_SEXT %1 }]),
  (apply [{MIR %A = G_SEXTLOAD %root }])>;

The only unusual thing about this rule is that the root isn't at the bottom. Instead of starting at a use and matching towards defs, we're starting at the def and matching towards uses. This has some potentially useful properties. The combine algorithm has to choose an insertion point for the replacement code, and the traditional choice has been the root of the match. Assuming we keep doing the same thing, 'upside-down' matching like this is able to avoid the checks that the load is safe to move, is non-volatile, has one use, etc., that would be necessary if we moved the G_LOAD down to the G_SEXT. Also, assuming we keep the same broadly bottom-up processing order as the existing Combine algorithm, this kind of rule has relatively lower priority than conventional rules because the root is seen later. This can be useful as (depending on the algorithm) the MIR may be more settled by the time it tries to match.

Along the same lines, the syntax can potentially support the root being in the middle of a match. This could be used in a similar way to the upside-down match above to control the insertion point and priority. For example:
def : GICombineRule<
  (defs root:$root, reg:$A, reg:$B, reg:$C),
  (match [{MIR %1(s32) = G_TRUNC %A(s64)
               %2(s32) = G_TRUNC %B(s64)
               %root = G_ADD %1, %2
               %C(s64) = G_SEXT %root }]),
  (apply [{MIR %root = G_ADD %A, %B
               %C = G_SEXT_INREG %root }])>;
Unfortunately, I don't have any concrete examples where this would be useful in comparison to a conventional or upside-down match at the moment. I'm mostly keeping the door open as I can see potential benefits (mostly w.r.t. sinking and hoisting safety around an instruction that would be difficult to test for) given an appropriate rule, and also because I'm inclined to say that the tablegen syntax shouldn't be the reason it's not possible. It should be up to the Combine algorithm and the tablegen-erated pass involved in specializing the algorithm for a target.

Multiple roots

This one requires algorithm changes which I'd prefer not to discuss in this RFC. Assuming the underlying algorithm gains support for this, this is how the syntax would look:
def : GICombineRule<
  (defs root:$root1, root:$root2, reg:$A, reg:$B),
  (match [{MIR %root1 = G_ADD %A, %B }],
         [{MIR %root2 = G_SUB %A, %B }]),
  (apply [{MIR %root1, %root2 = BUTTERFLY %A, %B }])>;
This would match a G_ADD and G_SUB with operands in common and combine them into a BUTTERFLY operation. You can think of this as two normal rules, one with %root1 as the root and the other with %root2 as the root.

Grouping and Ordering Rules

Combine rules are ordered and grouped using definitions of the form:
    def trivial_combines : GICombineGroup<[copy_prop]>;
    def combines_for_extload: GICombineGroup<[extending_loads]>; 
    def all_combines : GICombineGroup<[trivial_combines, combines_for_extload]>; 
Essentially, we create a long ordered list of all the combines. Tablegen is free to re-order rules so long as the resulting ruleset always behaves as if it were in this specific order. For example, the list [sext_trunc_sext, zext_trunc_zext, sext_sext, zext_zexts] is free to be re-ordered to [sext_trunc_sext, sext_sext, zext_trunc_zext, zext_zexts] because the rules involved are mutually exclusive due to the root opcode.

So why not make tablegen figure out the order like ISel does? ISel attempts to figure out an order using a scoring system called Complexity, along with a hack (AddedComplexity) that allows the user to provide magic numbers to fix the cases it gets wrong. The simplest way to confuse it is with patterns like (add s32, complexpattern1) and (add s32, complexpattern2). These have the same Complexity score, but in truth each one has a (possibly overlapping) range of complexity depending on what C++ code is inside the complexpatterns and which path through that C++ is dynamically taken. If it matches nothing then the score should be 0, but if it matches a dozen nodes it should be 12 (or possibly higher). We don't know which until we try to match that specific pattern. Correctly figuring out an order in the presence of complexpatterns is impossible. Similarly, it's also possible to confuse it with patterns that differ but overlap and add up to the same complexity due to quirks of the scoring system.

Declaring Combine Passes

Combiner passes are defined by subclassing GICombiner like so:
    def CommonPreLegalizerCombiner: GICombiner<
      "CommonGenPreLegalizerCombiner", [all_combines]> {
      let DisableRuleOption = "common-prelegalizercombiner-disable-rule";
      let Modifiers = [(disable_rule copy_prop)];
    }

This causes tablegen to create a class named 'CommonPreLegalizerCombiner' which you can use to perform combines. This combiner contains all the combines mentioned in the previous section because it includes the 'all_combines' group. However, it disables the copy_prop rule to prevent it from attempting to match; I'll discuss that a bit more below. It also generates a command-line option (for asserts builds only) which can be used to further disable rules at run-time, which will be useful for tracking down bugs or for testing a rule in isolation. I'm hoping that one day tools like bugpoint will be able to search through the individual rules within a combine pass when searching for a minimal reproducer.

The Modifiers field is intended to allow targets to modify an existing ruleset (particularly a target independent one) with additional target specific quirks. For example, one particular rule might be doing more harm than good and should be disabled. Or maybe only a subset of the things it would normally match are safe in which case an extra predicate should be tested. Being able to make minor edits to the ruleset without taking on the whole maintenance burden of the common code or causing code bloat by duplicating tables would be useful for targets that are generally similar to the targets within LLVM but have minor quirks.

Larger scale changes should take an alternate approach to modifiers. It's expected that even targets that are very different from the rest of the pack still have features in common. Such targets can declare their own combiner to replace the common one but still generally make use of the improvements made by the wider community. This is where GICombinerGroup will start to shine as such a target can declare a combiner like so:
    def MyTargetPreLegalizerCombiner: GICombiner<
      "MyTargetGenPreLegalizerCombiner",
      [common_extend_optimizations,
       common_extending_loads,
       // common_rsqrt_and_nr_iterations, // This target has a real sqrt operation
       mytarget_special_fma,
       common_fma,
       common_bswap_idioms,
       mytarget_bcd_arithmetic
      ]> {
    }
As LLVM improves the common_* groups, the target benefits from those improvements automatically. However, it doesn't benefit from new groups being added to common_all_combines because it's no longer using that group, so entirely new categories of combines added there would not appear in this combiner. It would still be nice to find out that a new category has appeared, though, so that a decision can be made on it. To that end, I'm considering adding:
    let Verify = [(has_all_of_except all_combines, common_rsqrt_and_nr_iterations)];
which would cause a warning if something new appeared in the original group.

Conclusion

There is a lot of work that needs to be done to get all this working, and some of it may have to change once it runs into the reality of implementation :-). However, we think that this will prove to be a very convenient and powerful syntax, with the potential to enable a variety of tools from profilers, to bug reducers, to correctness checking tools.

There is a patch at https://reviews.llvm.org/D54286 that makes a start on it to try out the general feel of the syntax, but it currently lacks the core feature of generating match and apply code from the MIR. I'm going to be looking into that over the next few weeks.