[PATCH] D35635: Optimize {s,u}{add,sub}.with.overflow on ARM

Joel Galenson via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Oct 20 15:10:46 PDT 2017


jgalenson added a comment.

> I mean you could pattern match after the uaddo/saddo is transformed.  For unsigned add, you get something like `ARMISD::BRCOND(ARMISD::CMPZ(ARMISD::ADDE(0, 0, N), 1))`, which you can transform by eliminating the CMPZ+ADDE.  The pattern for signed add is slightly different, but similar.

Ah yes.  I considered that, but as you said, it would require more complicated matching and would be more brittle if the saddo lowering changes (as it just did), so I preferred doing it this way.
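
For anyone skimming the thread, here is a minimal, self-contained illustration (mine, not part of the patch) of the kind of source Clang lowers to @llvm.uadd.with.overflow.i32 plus a branch on the overflow bit, which is the pattern this change improves on ARM; all names below are purely illustrative:

    #include <cstdint>
    #include <cstdio>

    // Returns true when the unsigned 32-bit addition wraps; Clang lowers
    // __builtin_add_overflow to the uadd.with.overflow intrinsic.
    bool add_overflows(uint32_t a, uint32_t b, uint32_t &sum) {
      return __builtin_add_overflow(a, b, &sum);
    }

    int main() {
      uint32_t sum;
      if (add_overflows(0xFFFFFFFFu, 1u, sum))
        std::puts("overflow");            // taken: 0xFFFFFFFF + 1 wraps to 0
      else
        std::printf("sum = %u\n", sum);
      return 0;
    }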



================
Comment at: lib/Target/ARM/ARMISelLowering.cpp:4582
+    assert((CC == ISD::SETEQ || CC == ISD::SETNE) &&
+           "Unexpected condition code.");
+    // Only lower legal XALUO ops.
----------------
efriedma wrote:
> jgalenson wrote:
> > efriedma wrote:
> > > Is this assert actually guaranteed somehow?  I mean, it should be possible to transform any relevant condition to an equivalent SETEQ or SETNE, but I don't see any code to actually ensure this.
> > Good point.  I actually copied that part of the code over from the AArch64 backend.  I would guess that the assumption is that you should only be checking if the overflow bit is set or unset, hence EQ or NE, but I could imagine something transforming those into other operations.  I changed it to be part of the condition to avoid the problem.  Do you think it's worth doing something similar in the AArch64 backend?
> Yes.
Okay, I'll work on submitting a patch there (it should be NFC).
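
My reading of "changed it to be part of the condition" (a sketch of the idea, not the exact diff): instead of asserting, the lowering simply bails out when the condition code is not an equality test of the overflow bit, e.g.

    // Only handle equality checks of the overflow bit; give up otherwise
    // rather than asserting.
    if (CC != ISD::SETEQ && CC != ISD::SETNE)
      return SDValue();
    // Only lower legal XALUO ops.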


https://reviews.llvm.org/D35635




