[PATCH] D33758: [globalisel][tablegen] Partially fix compile-time regressions by converting matcher to state-machine(s)

Daniel Sanders via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 20 06:29:54 PDT 2017


dsanders added inline comments.


================
Comment at: include/llvm/CodeGen/GlobalISel/InstructionSelector.h:63-71
+  /// Record the specified instruction
+  /// - NewInsnID - Instruction ID to define
+  GIM_RecordInsn,
+  /// Follow the specified operand to the def and push it onto the stack
+  /// - InsnID - Instruction ID
+  /// - OpIdx - Operand index
+  GIM_PushInsnOpDef,
----------------
ab wrote:
> Are all three opcodes necessary?  It seems like we only need one, that defines an ID from an operand.
I think there's a potential argument for merging GIM_RecordInsn and GIM_PushInsnOpDef. They're currently separate because I wasn't completely convinced that all visited instructions needed to be recorded. I'm leaning towards recording all of them, but I decided to come back to that once the compile-time regression is resolved.

GIM_PopInsn needs to be kept separate for the current DAG walking code. Without it we wouldn't be able to walk something like:
  (G_MUL (G_ADD a, b), (G_ADD c, d))
since once you descend into either G_ADD you wouldn't be able to visit the other. That said, I agree that tweaking GIM_PushInsnOpDef so that it takes a source instruction ID would achieve the same effect.

Are you ok with me working on this change as an optimisation after the compile-time regression fixes are committed? Currently all my work is bottlenecked on that.
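To make the role of GIM_PopInsn concrete, here is a minimal standalone sketch of the stack discipline for walking (G_MUL (G_ADD a, b), (G_ADD c, d)). The Instr struct, opcode values, and walk() helper are invented for illustration and are not the LLVM implementation:

```cpp
#include <vector>

enum : int { G_ADD = 1, G_MUL = 2 }; // illustrative opcode values

// Toy DAG node standing in for a MachineInstr.
struct Instr {
  int Opcode;
  std::vector<Instr *> Defs; // instruction defining each source operand
};

// Walk (G_MUL (G_ADD a, b), (G_ADD c, d)) match-table style: record the
// root, push into the first G_ADD, pop back to the G_MUL, then push into
// the second G_ADD. Without the pop, the sibling G_ADD is unreachable.
std::vector<int> walk(Instr &Root) {
  std::vector<Instr *> Stack{&Root};
  std::vector<int> Visited;
  Visited.push_back(Stack.back()->Opcode); // GIM_RecordInsn
  Stack.push_back(Stack.back()->Defs[0]);  // GIM_PushInsnOpDef (operand 0)
  Visited.push_back(Stack.back()->Opcode); // GIM_RecordInsn
  Stack.pop_back();                        // GIM_PopInsn
  Stack.push_back(Stack.back()->Defs[1]);  // GIM_PushInsnOpDef (operand 1)
  Visited.push_back(Stack.back()->Opcode); // GIM_RecordInsn
  return Visited;
}
```

A push opcode that took an explicit source instruction ID, as suggested, would let the second push name the G_MUL directly instead of relying on the pop.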


================
Comment at: include/llvm/CodeGen/GlobalISel/InstructionSelectorImpl.h:30
+    const PredicateBitset &AvailableFeatures) const {
+  const int64_t *Command = MatchTable;
+  while (true) {
----------------
ab wrote:
> Should this be unsigned?
> 
> Also, is int64_t necessary?  Seems like almost everything should fit in 16 bits.  X86 is close to 15k opcodes, but we can complain when we emit something that doesn't fit.  As for CheckInt, we can just encode the 64-bit value across 4 operands.
> Should this be unsigned?

It needs to be signed at the moment but the reason isn't a particularly good one: GIM_CheckInt takes a signed integer immediate straight from the table.

> Also, is int64_t necessary?

For most things no, but it's necessary for GIM_CheckInt at the moment. Narrowing the type is a size optimization that I decided to leave until after the compile-time issue is resolved, since all the tablegen work is bottlenecked on it.
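The suggested encoding (splitting a 64-bit immediate across four 16-bit table entries) could look like the following sketch; the helper names are invented for illustration and are not LLVM API:

```cpp
#include <cstdint>

// Split a 64-bit immediate into four 16-bit table entries, high word
// first, as the emitter would when writing the match table.
inline void emitInt64(uint16_t Out[4], int64_t Imm) {
  uint64_t U = static_cast<uint64_t>(Imm);
  for (int I = 0; I != 4; ++I)
    Out[I] = static_cast<uint16_t>(U >> (48 - 16 * I));
}

// Reassemble the immediate in the table interpreter; sign is preserved
// because all 64 bits round-trip through the four entries.
inline int64_t readInt64(const uint16_t In[4]) {
  uint64_t U = 0;
  for (int I = 0; I != 4; ++I)
    U = (U << 16) | In[I];
  return static_cast<int64_t>(U);
}
```

With this, the table element type could shrink to 16 bits while GIM_CheckInt keeps its full signed 64-bit range.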


================
Comment at: utils/TableGen/GlobalISelEmitter.cpp:1984-1993
+  std::vector<LLTCodeGen> TypeObjects = {
+      LLT::scalar(8),      LLT::scalar(16),     LLT::scalar(32),
+      LLT::scalar(64),     LLT::scalar(80),     LLT::vector(8, 1),
+      LLT::vector(16, 1),  LLT::vector(32, 1),  LLT::vector(64, 1),
+      LLT::vector(8, 8),   LLT::vector(16, 8),  LLT::vector(32, 8),
+      LLT::vector(64, 8),  LLT::vector(4, 16),  LLT::vector(8, 16),
+      LLT::vector(16, 16), LLT::vector(32, 16), LLT::vector(2, 32),
----------------
ab wrote:
> Instead of a hardcoded list, MVTToLLT whatever can be converted in MVT::all_valuetypes() ?
This is another size optimization I left for later. I'd like this table to be created based on the calls to LLTCodeGen's constructor. This will make the table much smaller for most targets since many of these types are only needed on X86.
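The emission-time collection described above could be sketched as follows; the TypeKey encoding and helper names are hypothetical and are not the real LLTCodeGen API:

```cpp
#include <set>
#include <utility>

// Hypothetical sketch: record each type as rules reference it, so the
// emitted table contains only the types a target actually uses rather
// than a hardcoded list. Encode a type as (NumElts, SizeInBits), with
// NumElts == 0 meaning a scalar.
using TypeKey = std::pair<unsigned, unsigned>;

std::set<TypeKey> &knownTypes() {
  static std::set<TypeKey> Types; // sorted and deduplicated automatically
  return Types;
}

// Called wherever the emitter would construct an LLTCodeGen.
void recordType(unsigned NumElts, unsigned SizeInBits) {
  knownTypes().insert({NumElts, SizeInBits});
}
```

The emitter would then iterate knownTypes() when writing the TypeObjects table, keeping it small for targets that use only a handful of types.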


https://reviews.llvm.org/D33758




