[llvm] r325845 - [X86] Turn setne X, signedmax into setgt signedmax, X in LowerVSETCC to avoid an invert

Craig Topper via llvm-commits llvm-commits at lists.llvm.org
Thu Feb 22 16:21:39 PST 2018


Author: ctopper
Date: Thu Feb 22 16:21:39 2018
New Revision: 325845

URL: http://llvm.org/viewvc/llvm-project?rev=325845&view=rev
Log:
[X86] Turn setne X, signedmax into setgt signedmax, X in LowerVSETCC to avoid an invert

We won't be able to fold the constant pool load, but it's still better than materializing all-ones and XORing for the invert if we used PCMPEQ.

This will fix another regression from D42948.

Modified:
    llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
    llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll

Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=325845&r1=325844&r2=325845&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Thu Feb 22 16:21:39 2018
@@ -18100,12 +18100,16 @@ static SDValue LowerVSETCC(SDValue Op, c
   }
 
   // If this is a SETNE against the signed minimum value, change it to SETGT.
+  // If this is a SETNE against the signed maximum value, change it to SETLT 
+  // which will be swapped to SETGT.
   // Otherwise we use PCMPEQ+invert.
   APInt ConstValue;
   if (Cond == ISD::SETNE &&
-      ISD::isConstantSplatVector(Op1.getNode(), ConstValue),
-      ConstValue.isMinSignedValue()) {
-    Cond = ISD::SETGT;
+      ISD::isConstantSplatVector(Op1.getNode(), ConstValue)) {
+    if (ConstValue.isMinSignedValue())
+      Cond = ISD::SETGT;
+    else if (ConstValue.isMaxSignedValue())
+      Cond = ISD::SETLT;
   }
 
   // If both operands are known non-negative, then an unsigned compare is the

Modified: llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll?rev=325845&r1=325844&r2=325845&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll (original)
+++ llvm/trunk/test/CodeGen/X86/vector-compare-simplify.ll Thu Feb 22 16:21:39 2018
@@ -345,3 +345,18 @@ define <4 x i32> @ne_smin(<4 x i32> %x)
   ret <4 x i32> %r
 }
 
+; Make sure we can efficiently handle ne smax by turning it into sgt. We can't
+; fold the constant pool load, but the alternative is a cmpeq+invert, which is
+; 3 instructions. The PCMPGT version is two instructions given sufficient
+; register allocation freedom to avoid the final mov to %xmm0 seen here.
+define <4 x i32> @ne_smax(<4 x i32> %x) {
+; CHECK-LABEL: ne_smax:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    movdqa {{.*#+}} xmm1 = [2147483647,2147483647,2147483647,2147483647]
+; CHECK-NEXT:    pcmpgtd %xmm0, %xmm1
+; CHECK-NEXT:    movdqa %xmm1, %xmm0
+; CHECK-NEXT:    retq
+  %cmp = icmp ne <4 x i32> %x, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
+  %r = sext <4 x i1> %cmp to <4 x i32>
+  ret <4 x i32> %r
+}

More information about the llvm-commits mailing list