[llvm] r325840 - [X86] Turn setne X, signedmin into setgt X, signedmin in LowerVSETCC to avoid an invert
Craig Topper via llvm-commits
llvm-commits at lists.llvm.org
Thu Feb 22 15:46:28 PST 2018
Author: ctopper
Date: Thu Feb 22 15:46:28 2018
New Revision: 325840
URL: http://llvm.org/viewvc/llvm-project?rev=325840&view=rev
Log:
[X86] Turn setne X, signedmin into setgt X, signedmin in LowerVSETCC to avoid an invert
This will fix one of the regressions from D42948.
Differential Revision: https://reviews.llvm.org/D43531
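
The transform rests on the observation that, for signed integers, the signed minimum is the only value that is not greater than the signed minimum, so x != INT_MIN and x > INT_MIN compute the same predicate. A minimal standalone C++ sketch of that equivalence (illustrative only, not part of the commit):

#include <cassert>
#include <climits>

// x != INT_MIN and x > INT_MIN agree for every signed int, because
// INT_MIN is the smallest representable value.
static bool ne_smin(int x) { return x != INT_MIN; }
static bool gt_smin(int x) { return x > INT_MIN; }

int main() {
  // Spot-check values at and just above the signed minimum, plus a few others.
  for (long long v = INT_MIN; v <= (long long)INT_MIN + 4; ++v)
    assert(ne_smin((int)v) == gt_smin((int)v));
  assert(ne_smin(0) == gt_smin(0) && ne_smin(INT_MAX) == gt_smin(INT_MAX));
  return 0;
}

This is why the SETNE can be rewritten as SETGT against the same splat operand, which x86 lowers as a single PCMPGT instead of PCMPEQ followed by an invert.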
Modified:
llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=325840&r1=325839&r2=325840&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Thu Feb 22 15:46:28 2018
@@ -18099,6 +18099,15 @@ static SDValue LowerVSETCC(SDValue Op, c
}
}
+  // If this is a SETNE against the signed minimum value, change it to SETGT.
+  // Otherwise we use PCMPEQ+invert.
+  APInt ConstValue;
+  if (Cond == ISD::SETNE &&
+      ISD::isConstantSplatVector(Op1.getNode(), ConstValue) &&
+      ConstValue.isMinSignedValue()) {
+    Cond = ISD::SETGT;
+  }
+
// If both operands are known non-negative, then an unsigned compare is the
// same as a signed compare and there's no need to flip signbits.
// TODO: We could check for more general simplifications here since we're
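
For context, the codegen difference amounts to one instruction on SSE2. A hedged illustration with intrinsics (the function names here are made up for this sketch; the commit itself only changes the condition code in LowerVSETCC):

#include <climits>
#include <emmintrin.h>

// Old lowering of (icmp ne x, splat(INT_MIN)): PCMPEQD followed by an
// invert (PXOR with all-ones).
static __m128i ne_smin_via_eq(__m128i x) {
  __m128i smin = _mm_set1_epi32(INT_MIN);
  __m128i eq = _mm_cmpeq_epi32(x, smin);
  return _mm_xor_si128(eq, _mm_set1_epi32(-1));
}

// New lowering: a single PCMPGTD, since x != INT_MIN iff x > INT_MIN
// for signed 32-bit elements.
static __m128i ne_smin_via_gt(__m128i x) {
  return _mm_cmpgt_epi32(x, _mm_set1_epi32(INT_MIN));
}

Both produce all-ones lanes where an element differs from the signed minimum and all-zeros lanes otherwise, matching the sign-extended i1 result in the test below.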
Modified: llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll?rev=325840&r1=325839&r2=325840&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll (original)
+++ llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll Thu Feb 22 15:46:28 2018
@@ -334,3 +334,14 @@ define <4 x i32> @uge_smin(<4 x i32> %x)
ret <4 x i32> %r
}
+; Make sure we can efficiently handle ne smin by turning into sgt.
+define <4 x i32> @ne_smin(<4 x i32> %x) {
+; CHECK-LABEL: ne_smin:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    pcmpgtd {{.*}}(%rip), %xmm0
+; CHECK-NEXT:    retq
+  %cmp = icmp ne <4 x i32> %x, <i32 -2147483648, i32 -2147483648, i32 -2147483648, i32 -2147483648>
+  %r = sext <4 x i1> %cmp to <4 x i32>
+  ret <4 x i32> %r
+}
+