[PATCH] D38160: [AArch64] Improve codegen for inverted overflow checking intrinsics

Kristof Beyls via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Oct 5 01:51:06 PDT 2017


kristof.beyls added inline comments.


================
Comment at: lib/Target/AArch64/AArch64ISelLowering.cpp:1980-2012
+  ConstantSDNode *COp1 = dyn_cast<ConstantSDNode>(Other);
+  unsigned Opc = Sel.getOpcode();
+  // If the operand is an overflow checking operation, invert the condition
+  // code and kill the xor(op, 1).
+  if (Sel.getResNo() == 1 &&
+      (Opc == ISD::SADDO || Opc == ISD::UADDO || Opc == ISD::SSUBO ||
+       Opc == ISD::USUBO || Opc == ISD::SMULO || Opc == ISD::UMULO) &&
----------------
Hi Amara,

I'm only vaguely familiar with this area of the code base.
My understanding is that you're aiming to have one more pattern apply involving an xor node.
I think it'd be nice to write out the pattern in a comment, just as is done for the pattern matched by the existing code in LowerXOR.

Apart from that, given my unfamiliarity with this code base, I wonder why this pattern-matching optimization is done during lowering.
Are there good reasons this optimization isn't done elsewhere, e.g. described by a TableGen pattern, or during DAGCombine (e.g. in performXorCombine in this same source file)?
Apologies if the answer is blatantly obvious to people more experienced in this area.

Thanks,

Kristof


Repository:
  rL LLVM

https://reviews.llvm.org/D38160
