[PATCH] D35635: Optimize {s,u}{add,sub}.with.overflow on ARM
Eli Friedman via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Fri Oct 20 15:04:15 PDT 2017
efriedma added a comment.
> I'm not sure what you mean. After the ARMISD::BRCOND is created, only a few unrelated instructions are matched before the saddo is transformed. So without the reordering we've discussed, I don't see how this would help. I'm probably missing something, but it sounds like it might well not be worth doing.
I mean you could pattern match after the uaddo/saddo is transformed. For unsigned add, you get something like `ARMISD::BRCOND(ARMISD::CMPZ(ARMISD::ADDE(0, 0, N), 1))`, which you can transform by eliminating the CMPZ+ADDE. The pattern for signed add is slightly different, but similar.
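For context, the uaddo/saddo nodes under discussion typically originate from overflow-checked adds in source code. A minimal sketch (the helper names here are illustrative, not from the patch): `__builtin_add_overflow` is a Clang/GCC builtin that Clang lowers to `llvm.uadd.with.overflow` / `llvm.sadd.with.overflow`, which become the ISD::UADDO / ISD::SADDO nodes the combine matches.

```cpp
#include <cstdint>

// Unsigned form -> llvm.uadd.with.overflow -> ISD::UADDO.
bool checked_add_u32(uint32_t a, uint32_t b, uint32_t *out) {
  return __builtin_add_overflow(a, b, out);  // returns true on overflow
}

// Signed form -> llvm.sadd.with.overflow -> ISD::SADDO.
bool checked_add_s32(int32_t a, int32_t b, int32_t *out) {
  return __builtin_add_overflow(a, b, out);  // returns true on overflow
}
```

On overflow the builtin still stores the wrapped result, so branching on the returned flag is exactly the BRCOND-on-overflow shape discussed above.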
================
Comment at: lib/Target/ARM/ARMISelLowering.cpp:4582
+  assert((CC == ISD::SETEQ || CC == ISD::SETNE) &&
+         "Unexpected condition code.");
+  // Only lower legal XALUO ops.
----------------
jgalenson wrote:
> efriedma wrote:
> > Is this assert actually guaranteed somehow? I mean, it should be possible to transform any relevant condition to an equivalent SETEQ or SETNE, but I don't see any code to actually ensure this.
> Good point. I actually copied that part of the code over from the AArch64 backend. I would guess the assumption is that you should only be checking whether the overflow bit is set or unset, hence EQ or NE, but I could imagine something transforming those into other operations. I changed the assert into part of the condition to avoid the problem. Do you think it's worth doing something similar in the AArch64 backend?
Yes.
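The fix jgalenson describes can be illustrated with a self-contained analog (a sketch only; the enum and function here are illustrative stand-ins, not LLVM's actual ISD::CondCode or the ARMISelLowering.cpp code):

```cpp
// Stand-in for a few ISD::CondCode values.
enum CondCode { SETEQ, SETNE, SETLT, SETGT };

// Before the change, an unexpected condition code tripped the assert.
// After it, the check is simply part of the combine's guard: the combine
// declines to fire and generic lowering handles the node instead.
bool overflowCombineApplies(CondCode CC) {
  return CC == SETEQ || CC == SETNE;
}
```

Folding the check into the guard trades an assertion failure on unanticipated inputs for a safe fallback to the default lowering path.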
https://reviews.llvm.org/D35635