[PATCH] TableGen: allow non-leaf ComplexPatterns
t.p.northover at gmail.com
Thu May 15 14:39:39 PDT 2014
> Your patch seems to be doing the right thing. However, I still don't understand the
> implication of your change on the complex patterns, SelectADDRriU6_[0-2],
> defined in HexagonISelDAGToDAG.cpp. Can you please explain it a little?
Sure, suppose someone changed (say) SelectADDRriU2_0(SDValue Addr,
SDValue &Base, SDValue &Offset) so that it examined Addr, looking for
something like (add IntRegs:$base, imm:$offset) and set the Base &
Offset references appropriately.
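For concreteness, such a modified selector might look roughly like the sketch below. This is only an illustration of the shape of the change, not the actual Hexagon code; the body, the immediate width check, and the exact calls are my assumptions:

```cpp
// Hypothetical sketch, not the real HexagonISelDAGToDAG.cpp code.
// Decompose Addr into a base and a small unsigned immediate offset
// when it has the form (add base, imm).
bool HexagonDAGToDAGISel::SelectADDRriU2_0(SDValue Addr, SDValue &Base,
                                           SDValue &Offset) {
  if (Addr.getOpcode() == ISD::ADD)
    if (auto *C = dyn_cast<ConstantSDNode>(Addr.getOperand(1)))
      if (isUInt<2>(C->getZExtValue())) {
        Base = Addr.getOperand(0); // may itself still need selecting
        Offset = CurDAG->getTargetConstant(C->getZExtValue(), MVT::i32);
        return true;
      }
  // Otherwise treat the whole address as the base with a zero offset.
  Base = Addr;
  Offset = CurDAG->getTargetConstant(0, MVT::i32);
  return true;
}
```

The interesting case for the patch is the first branch, where the Base result is an interior node of the original DAG rather than a leaf.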
At the moment what would happen is that a (MemOPw_ADDr_V4 Addr, 0,
whatever) node would be produced. Addr would then go through a second
round of selection (even though SelectADDRriU2_0 had already dealt
with it), leading to multiple instructions and assembly looking like:
    r0 = Base + Offset
    mem(r0+#0) += whatever
(this won't actually be seen now, because you have a *real* pattern
that looks for "(add IntRegs:$base, extPred:$offset)" and has a higher
complexity so it gets tried first, but imagine some other cunning ruse
by SelectADDRriU2_0 to simplify things).
What I think you'd want to happen instead is that a (MemOPw_ADDr_V4
Base, Offset, whatever) node is produced, with selection then carrying
on from Base & Offset if needed. This would give code like:
    mem(Base+Offset) += whatever
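For reference, the TableGen side of such an addressing mode is a two-result ComplexPattern naming the C++ selector; the declaration below is an illustrative sketch, not the exact Hexagon definition:

```
// Hypothetical sketch; the real declaration lives in the Hexagon .td
// files. Two results (Base and Offset), filled in by the C++ function
// named in the third argument. With this patch, those results may be
// non-leaf nodes and are selected in turn.
def ADDRriU2_0 : ComplexPattern<i32, 2, "SelectADDRriU2_0", [], []>;
```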
Interpreting your question completely differently (which may be
helpful for any out-of-tree targets you're considering): if XYZ is a
ComplexPattern and you only use it where both the input and the output
match up to that monolithic entity completely, then you should see no
difference. That is, both of these should be unchanged by my work:
def : Pat<(whatever XYZ:$a, ...), (INST XYZ:$a)>;
def : Instruction<(ins XYZ:$a, ...), [(set ..., (whatever XYZ:$a))]>;
XYZ is used in both the input and the output, so everything matches up
just as it did before.
I hope this helps, but please do ask for any clarification if I've not
been clear enough.