[llvm] 913d7a1 - [X86][SSE2] Use smarter instruction patterns for lowering UMIN/UMAX with v8i16.
Simon Pilgrim via llvm-commits
llvm-commits at lists.llvm.org
Sun Oct 11 03:21:44 PDT 2020
Author: Simon Pilgrim
Date: 2020-10-11T11:21:23+01:00
New Revision: 913d7a110efaad06888523d17e03b2833fc83ed2
URL: https://github.com/llvm/llvm-project/commit/913d7a110efaad06888523d17e03b2833fc83ed2
DIFF: https://github.com/llvm/llvm-project/commit/913d7a110efaad06888523d17e03b2833fc83ed2.diff
LOG: [X86][SSE2] Use smarter instruction patterns for lowering UMIN/UMAX with v8i16.
This is my first LLVM patch, so please tell me if there are any process issues.
The main observation behind this patch is that we can lower UMIN/UMAX with v8i16 by using unsigned saturated subtraction in a clever way. Previously this operation was lowered by flipping the sign bit of both inputs and of the output, which turns the unsigned minimum/maximum into a signed one.
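As a quick sanity check of the identities the new lowering relies on, here is a minimal scalar sketch (my illustration, not part of the patch); the usubsat helper stands in for the per-lane behaviour of psubusw:

    #include <algorithm>
    #include <cassert>
    #include <cstdint>

    // Per-lane behaviour of psubusw: subtract with unsigned saturation at 0.
    static uint16_t usubsat(uint16_t A, uint16_t B) {
      return A > B ? uint16_t(A - B) : uint16_t(0);
    }

    int main() {
      // Sample the 16-bit domain and check both identities:
      //   umin(a, b) == a - usubsat(a, b)    (lowered as psubusw + psubw)
      //   umax(a, b) == usubsat(b, a) + a    (lowered as psubusw + paddw)
      for (uint32_t I = 0; I <= 0xFFFF; I += 251) {
        for (uint32_t J = 0; J <= 0xFFFF; J += 257) {
          uint16_t A = uint16_t(I), B = uint16_t(J);
          assert(uint16_t(A - usubsat(A, B)) == std::min(A, B));
          assert(uint16_t(usubsat(B, A) + A) == std::max(A, B));
        }
      }
      return 0;
    }

The identities hold because usubsat(a, b) is a - b when a >= b (so the UMIN form yields b) and 0 otherwise (yielding a), with the symmetric argument for UMAX.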
We could use this trick in reverse to lower SMIN/SMAX with v16i8 instead. In terms of latency/throughput this would be the same; it just needs one large move instruction. However, the sign-bit flipping has an increased chance of being optimized further, which is particularly apparent in the "reduce" test cases. Due to the slight regression in the single-use case, this patch no longer proposes that change.
Unfortunately this argument also applies in reverse to the new lowering of UMIN/UMAX with v8i16, which regresses the "horizontal-reduce-umax", "horizontal-reduce-umin", "vector-reduce-umin" and "vector-reduce-umax" test cases a bit with this patch. Maybe some extra case analysis could avoid this. Independent of that, however, I believe the benefits in the common case of just 1 to 3 chained min/max instructions outweigh the downsides in that specific case.
Patch By: @TomHender (Tom Hender) ActuallyaDeviloper
Differential Revision: https://reviews.llvm.org/D87236
Added:
Modified:
llvm/lib/Target/X86/X86ISelLowering.cpp
llvm/lib/Target/X86/X86TargetTransformInfo.cpp
llvm/test/Analysis/CostModel/X86/arith-uminmax.ll
llvm/test/CodeGen/X86/horizontal-reduce-umax.ll
llvm/test/CodeGen/X86/horizontal-reduce-umin.ll
llvm/test/CodeGen/X86/machine-combiner-int-vec.ll
llvm/test/CodeGen/X86/masked_store_trunc_usat.ll
llvm/test/CodeGen/X86/midpoint-int-vec-128.ll
llvm/test/CodeGen/X86/sat-add.ll
llvm/test/CodeGen/X86/umax.ll
llvm/test/CodeGen/X86/umin.ll
llvm/test/CodeGen/X86/vec_minmax_uint.ll
llvm/test/CodeGen/X86/vector-reduce-umax.ll
llvm/test/CodeGen/X86/vector-reduce-umin.ll
llvm/test/CodeGen/X86/vector-trunc-usat.ll
llvm/test/CodeGen/X86/vselect-minmax.ll
Removed:
################################################################################
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 4fed4448823f..be77233a60f8 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -26945,17 +26945,15 @@ static SDValue LowerMINMAX(SDValue Op, SelectionDAG &DAG) {
SDValue N0 = Op.getOperand(0);
SDValue N1 = Op.getOperand(1);
- // For pre-SSE41, we can perform UMIN/UMAX v8i16 by flipping the signbit,
- // using the SMIN/SMAX instructions and flipping the signbit back.
+ // For pre-SSE41, we can perform UMIN/UMAX v8i16 by using psubusw.
if (VT == MVT::v8i16) {
assert((Opcode == ISD::UMIN || Opcode == ISD::UMAX) &&
"Unexpected MIN/MAX opcode");
- SDValue Sign = DAG.getConstant(APInt::getSignedMinValue(16), DL, VT);
- N0 = DAG.getNode(ISD::XOR, DL, VT, N0, Sign);
- N1 = DAG.getNode(ISD::XOR, DL, VT, N1, Sign);
- Opcode = (Opcode == ISD::UMIN ? ISD::SMIN : ISD::SMAX);
- SDValue Result = DAG.getNode(Opcode, DL, VT, N0, N1);
- return DAG.getNode(ISD::XOR, DL, VT, Result, Sign);
+ if (Opcode == ISD::UMIN)
+ return DAG.getNode(ISD::SUB, DL, VT, N0,
+ DAG.getNode(ISD::USUBSAT, DL, VT, N0, N1));
+ return DAG.getNode(ISD::ADD, DL, VT,
+ DAG.getNode(ISD::USUBSAT, DL, VT, N1, N0), N0);
}
// Else, expand to a compare/select.
diff --git a/llvm/lib/Target/X86/X86TargetTransformInfo.cpp b/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
index de5ac52b5469..8006f7787565 100644
--- a/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
+++ b/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
@@ -2619,7 +2619,9 @@ int X86TTIImpl::getTypeBasedIntrinsicInstrCost(
{ ISD::SSUBSAT, MVT::v16i8, 1 },
{ ISD::UADDSAT, MVT::v8i16, 1 },
{ ISD::UADDSAT, MVT::v16i8, 1 },
+ { ISD::UMAX, MVT::v8i16, 2 },
{ ISD::UMAX, MVT::v16i8, 1 },
+ { ISD::UMIN, MVT::v8i16, 2 },
{ ISD::UMIN, MVT::v16i8, 1 },
{ ISD::USUBSAT, MVT::v8i16, 1 },
{ ISD::USUBSAT, MVT::v16i8, 1 },
diff --git a/llvm/test/Analysis/CostModel/X86/arith-uminmax.ll b/llvm/test/Analysis/CostModel/X86/arith-uminmax.ll
index 8b6f6f20e1b4..080084a3cdd1 100644
--- a/llvm/test/Analysis/CostModel/X86/arith-uminmax.ll
+++ b/llvm/test/Analysis/CostModel/X86/arith-uminmax.ll
@@ -39,9 +39,9 @@ define i32 @umax(i32 %arg) {
; SSE2-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V8I32 = call <8 x i32> @llvm.umax.v8i32(<8 x i32> undef, <8 x i32> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V16I32 = call <16 x i32> @llvm.umax.v16i32(<16 x i32> undef, <16 x i32> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I16 = call i16 @llvm.umax.i16(i16 undef, i16 undef)
-; SSE2-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V8I16 = call <8 x i16> @llvm.umax.v8i16(<8 x i16> undef, <8 x i16> undef)
-; SSE2-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V16I16 = call <16 x i16> @llvm.umax.v16i16(<16 x i16> undef, <16 x i16> undef)
-; SSE2-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V32I16 = call <32 x i16> @llvm.umax.v32i16(<32 x i16> undef, <32 x i16> undef)
+; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V8I16 = call <8 x i16> @llvm.umax.v8i16(<8 x i16> undef, <8 x i16> undef)
+; SSE2-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V16I16 = call <16 x i16> @llvm.umax.v16i16(<16 x i16> undef, <16 x i16> undef)
+; SSE2-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V32I16 = call <32 x i16> @llvm.umax.v32i16(<32 x i16> undef, <32 x i16> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I8 = call i8 @llvm.umax.i8(i8 undef, i8 undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V16I8 = call <16 x i8> @llvm.umax.v16i8(<16 x i8> undef, <16 x i8> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V32I8 = call <32 x i8> @llvm.umax.v32i8(<32 x i8> undef, <32 x i8> undef)
@@ -58,9 +58,9 @@ define i32 @umax(i32 %arg) {
; SSSE3-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V8I32 = call <8 x i32> @llvm.umax.v8i32(<8 x i32> undef, <8 x i32> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V16I32 = call <16 x i32> @llvm.umax.v16i32(<16 x i32> undef, <16 x i32> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I16 = call i16 @llvm.umax.i16(i16 undef, i16 undef)
-; SSSE3-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V8I16 = call <8 x i16> @llvm.umax.v8i16(<8 x i16> undef, <8 x i16> undef)
-; SSSE3-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V16I16 = call <16 x i16> @llvm.umax.v16i16(<16 x i16> undef, <16 x i16> undef)
-; SSSE3-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V32I16 = call <32 x i16> @llvm.umax.v32i16(<32 x i16> undef, <32 x i16> undef)
+; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V8I16 = call <8 x i16> @llvm.umax.v8i16(<8 x i16> undef, <8 x i16> undef)
+; SSSE3-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V16I16 = call <16 x i16> @llvm.umax.v16i16(<16 x i16> undef, <16 x i16> undef)
+; SSSE3-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V32I16 = call <32 x i16> @llvm.umax.v32i16(<32 x i16> undef, <32 x i16> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I8 = call i8 @llvm.umax.i8(i8 undef, i8 undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V16I8 = call <16 x i8> @llvm.umax.v16i8(<16 x i8> undef, <16 x i8> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V32I8 = call <32 x i8> @llvm.umax.v32i8(<32 x i8> undef, <32 x i8> undef)
@@ -235,9 +235,9 @@ define i32 @umin(i32 %arg) {
; SSE2-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V8I32 = call <8 x i32> @llvm.umin.v8i32(<8 x i32> undef, <8 x i32> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V16I32 = call <16 x i32> @llvm.umin.v16i32(<16 x i32> undef, <16 x i32> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I16 = call i16 @llvm.umin.i16(i16 undef, i16 undef)
-; SSE2-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V8I16 = call <8 x i16> @llvm.umin.v8i16(<8 x i16> undef, <8 x i16> undef)
-; SSE2-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V16I16 = call <16 x i16> @llvm.umin.v16i16(<16 x i16> undef, <16 x i16> undef)
-; SSE2-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V32I16 = call <32 x i16> @llvm.umin.v32i16(<32 x i16> undef, <32 x i16> undef)
+; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V8I16 = call <8 x i16> @llvm.umin.v8i16(<8 x i16> undef, <8 x i16> undef)
+; SSE2-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V16I16 = call <16 x i16> @llvm.umin.v16i16(<16 x i16> undef, <16 x i16> undef)
+; SSE2-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V32I16 = call <32 x i16> @llvm.umin.v32i16(<32 x i16> undef, <32 x i16> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I8 = call i8 @llvm.umin.i8(i8 undef, i8 undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V16I8 = call <16 x i8> @llvm.umin.v16i8(<16 x i8> undef, <16 x i8> undef)
; SSE2-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V32I8 = call <32 x i8> @llvm.umin.v32i8(<32 x i8> undef, <32 x i8> undef)
@@ -254,9 +254,9 @@ define i32 @umin(i32 %arg) {
; SSSE3-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V8I32 = call <8 x i32> @llvm.umin.v8i32(<8 x i32> undef, <8 x i32> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V16I32 = call <16 x i32> @llvm.umin.v16i32(<16 x i32> undef, <16 x i32> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I16 = call i16 @llvm.umin.i16(i16 undef, i16 undef)
-; SSSE3-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V8I16 = call <8 x i16> @llvm.umin.v8i16(<8 x i16> undef, <8 x i16> undef)
-; SSSE3-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V16I16 = call <16 x i16> @llvm.umin.v16i16(<16 x i16> undef, <16 x i16> undef)
-; SSSE3-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %V32I16 = call <32 x i16> @llvm.umin.v32i16(<32 x i16> undef, <32 x i16> undef)
+; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V8I16 = call <8 x i16> @llvm.umin.v8i16(<8 x i16> undef, <8 x i16> undef)
+; SSSE3-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %V16I16 = call <16 x i16> @llvm.umin.v16i16(<16 x i16> undef, <16 x i16> undef)
+; SSSE3-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %V32I16 = call <32 x i16> @llvm.umin.v32i16(<32 x i16> undef, <32 x i16> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %I8 = call i8 @llvm.umin.i8(i8 undef, i8 undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %V16I8 = call <16 x i8> @llvm.umin.v16i8(<16 x i8> undef, <16 x i8> undef)
; SSSE3-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %V32I8 = call <32 x i8> @llvm.umin.v32i8(<32 x i8> undef, <32 x i8> undef)
diff --git a/llvm/test/CodeGen/X86/horizontal-reduce-umax.ll b/llvm/test/CodeGen/X86/horizontal-reduce-umax.ll
index 5faf06199778..401974d61c37 100644
--- a/llvm/test/CodeGen/X86/horizontal-reduce-umax.ll
+++ b/llvm/test/CodeGen/X86/horizontal-reduce-umax.ll
@@ -239,17 +239,16 @@ define i16 @test_reduce_v8i16(<8 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v8i16:
; X86-SSE2: ## %bb.0:
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X86-SSE2-NEXT: paddw %xmm1, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -276,17 +275,16 @@ define i16 @test_reduce_v8i16(<8 x i16> %a0) {
; X64-SSE2-LABEL: test_reduce_v8i16:
; X64-SSE2: ## %bb.0:
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X64-SSE2-NEXT: paddw %xmm1, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -826,19 +824,19 @@ define i32 @test_reduce_v8i32(<8 x i32> %a0) {
define i16 @test_reduce_v16i16(<16 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v16i16:
; X86-SSE2: ## %bb.0:
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
-; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,3,2,3]
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X86-SSE2-NEXT: paddw %xmm1, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
+; X86-SSE2-NEXT: movdqa %xmm1, %xmm0
+; X86-SSE2-NEXT: psrld $16, %xmm0
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X86-SSE2-NEXT: paddw %xmm1, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -881,19 +879,19 @@ define i16 @test_reduce_v16i16(<16 x i16> %a0) {
;
; X64-SSE2-LABEL: test_reduce_v16i16:
; X64-SSE2: ## %bb.0:
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
-; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,3,2,3]
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X64-SSE2-NEXT: paddw %xmm1, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
+; X64-SSE2-NEXT: movdqa %xmm1, %xmm0
+; X64-SSE2-NEXT: psrld $16, %xmm0
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X64-SSE2-NEXT: paddw %xmm1, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -1644,23 +1642,23 @@ define i32 @test_reduce_v16i32(<16 x i32> %a0) {
define i16 @test_reduce_v32i16(<32 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v32i16:
; X86-SSE2: ## %bb.0:
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm4, %xmm3
-; X86-SSE2-NEXT: pxor %xmm4, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; X86-SSE2-NEXT: pxor %xmm4, %xmm2
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm2
-; X86-SSE2-NEXT: pxor %xmm4, %xmm0
-; X86-SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
-; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm2
+; X86-SSE2-NEXT: paddw %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm3
+; X86-SSE2-NEXT: paddw %xmm1, %xmm3
+; X86-SSE2-NEXT: psubusw %xmm2, %xmm3
+; X86-SSE2-NEXT: paddw %xmm2, %xmm3
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm3[2,3,2,3]
+; X86-SSE2-NEXT: psubusw %xmm3, %xmm0
+; X86-SSE2-NEXT: paddw %xmm3, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
+; X86-SSE2-NEXT: movdqa %xmm1, %xmm0
+; X86-SSE2-NEXT: psrld $16, %xmm0
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X86-SSE2-NEXT: paddw %xmm1, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -1709,23 +1707,23 @@ define i16 @test_reduce_v32i16(<32 x i16> %a0) {
;
; X64-SSE2-LABEL: test_reduce_v32i16:
; X64-SSE2: ## %bb.0:
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm4, %xmm3
-; X64-SSE2-NEXT: pxor %xmm4, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; X64-SSE2-NEXT: pxor %xmm4, %xmm2
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm2
-; X64-SSE2-NEXT: pxor %xmm4, %xmm0
-; X64-SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
-; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm2
+; X64-SSE2-NEXT: paddw %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm3
+; X64-SSE2-NEXT: paddw %xmm1, %xmm3
+; X64-SSE2-NEXT: psubusw %xmm2, %xmm3
+; X64-SSE2-NEXT: paddw %xmm2, %xmm3
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm3[2,3,2,3]
+; X64-SSE2-NEXT: psubusw %xmm3, %xmm0
+; X64-SSE2-NEXT: paddw %xmm3, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
+; X64-SSE2-NEXT: movdqa %xmm1, %xmm0
+; X64-SSE2-NEXT: psrld $16, %xmm0
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X64-SSE2-NEXT: paddw %xmm1, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -1988,17 +1986,16 @@ define i16 @test_reduce_v16i16_v8i16(<16 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v16i16_v8i16:
; X86-SSE2: ## %bb.0:
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X86-SSE2-NEXT: paddw %xmm1, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -2026,17 +2023,16 @@ define i16 @test_reduce_v16i16_v8i16(<16 x i16> %a0) {
; X64-SSE2-LABEL: test_reduce_v16i16_v8i16:
; X64-SSE2: ## %bb.0:
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X64-SSE2-NEXT: paddw %xmm1, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -2098,17 +2094,16 @@ define i16 @test_reduce_v32i16_v8i16(<32 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v32i16_v8i16:
; X86-SSE2: ## %bb.0:
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X86-SSE2-NEXT: paddw %xmm1, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X86-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X86-SSE2-NEXT: paddw %xmm0, %xmm1
; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -2136,17 +2131,16 @@ define i16 @test_reduce_v32i16_v8i16(<32 x i16> %a0) {
; X64-SSE2-LABEL: test_reduce_v32i16_v8i16:
; X64-SSE2: ## %bb.0:
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm0
+; X64-SSE2-NEXT: paddw %xmm1, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; X64-SSE2-NEXT: psubusw %xmm0, %xmm1
+; X64-SSE2-NEXT: paddw %xmm0, %xmm1
; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
diff --git a/llvm/test/CodeGen/X86/horizontal-reduce-umin.ll b/llvm/test/CodeGen/X86/horizontal-reduce-umin.ll
index cd048b8d7659..074005878c78 100644
--- a/llvm/test/CodeGen/X86/horizontal-reduce-umin.ll
+++ b/llvm/test/CodeGen/X86/horizontal-reduce-umin.ll
@@ -241,17 +241,19 @@ define i16 @test_reduce_v8i16(<8 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v8i16:
; X86-SSE2: ## %bb.0:
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -272,17 +274,19 @@ define i16 @test_reduce_v8i16(<8 x i16> %a0) {
; X64-SSE2-LABEL: test_reduce_v8i16:
; X64-SSE2: ## %bb.0:
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -766,19 +770,23 @@ define i32 @test_reduce_v8i32(<8 x i32> %a0) {
define i16 @test_reduce_v16i16(<16 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v16i16:
; X86-SSE2: ## %bb.0:
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -812,19 +820,23 @@ define i16 @test_reduce_v16i16(<16 x i16> %a0) {
;
; X64-SSE2-LABEL: test_reduce_v16i16:
; X64-SSE2: ## %bb.0:
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -1548,23 +1560,29 @@ define i32 @test_reduce_v16i32(<16 x i32> %a0) {
define i16 @test_reduce_v32i16(<32 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v32i16:
; X86-SSE2: ## %bb.0:
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm4, %xmm3
-; X86-SSE2-NEXT: pxor %xmm4, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm3, %xmm1
-; X86-SSE2-NEXT: pxor %xmm4, %xmm2
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm2
-; X86-SSE2-NEXT: pxor %xmm4, %xmm0
-; X86-SSE2-NEXT: pminsw %xmm2, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm1, %xmm4
+; X86-SSE2-NEXT: psubusw %xmm3, %xmm4
+; X86-SSE2-NEXT: psubw %xmm4, %xmm1
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm3
+; X86-SSE2-NEXT: psubusw %xmm2, %xmm3
+; X86-SSE2-NEXT: psubw %xmm3, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -1604,23 +1622,29 @@ define i16 @test_reduce_v32i16(<32 x i16> %a0) {
;
; X64-SSE2-LABEL: test_reduce_v32i16:
; X64-SSE2: ## %bb.0:
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm4, %xmm3
-; X64-SSE2-NEXT: pxor %xmm4, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm3, %xmm1
-; X64-SSE2-NEXT: pxor %xmm4, %xmm2
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm2
-; X64-SSE2-NEXT: pxor %xmm4, %xmm0
-; X64-SSE2-NEXT: pminsw %xmm2, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm1, %xmm4
+; X64-SSE2-NEXT: psubusw %xmm3, %xmm4
+; X64-SSE2-NEXT: psubw %xmm4, %xmm1
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm3
+; X64-SSE2-NEXT: psubusw %xmm2, %xmm3
+; X64-SSE2-NEXT: psubw %xmm3, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -1852,17 +1876,19 @@ define i16 @test_reduce_v16i16_v8i16(<16 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v16i16_v8i16:
; X86-SSE2: ## %bb.0:
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -1884,17 +1910,19 @@ define i16 @test_reduce_v16i16_v8i16(<16 x i16> %a0) {
; X64-SSE2-LABEL: test_reduce_v16i16_v8i16:
; X64-SSE2: ## %bb.0:
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
@@ -1929,17 +1957,19 @@ define i16 @test_reduce_v32i16_v8i16(<32 x i16> %a0) {
; X86-SSE2-LABEL: test_reduce_v32i16_v8i16:
; X86-SSE2: ## %bb.0:
; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X86-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X86-SSE2-NEXT: pxor %xmm2, %xmm0
-; X86-SSE2-NEXT: pxor %xmm2, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X86-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
; X86-SSE2-NEXT: movdqa %xmm0, %xmm1
; X86-SSE2-NEXT: psrld $16, %xmm1
-; X86-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X86-SSE2-NEXT: movd %xmm1, %eax
-; X86-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X86-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X86-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X86-SSE2-NEXT: psubw %xmm2, %xmm0
+; X86-SSE2-NEXT: movd %xmm0, %eax
; X86-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X86-SSE2-NEXT: retl
;
@@ -1961,17 +1991,19 @@ define i16 @test_reduce_v32i16_v8i16(<32 x i16> %a0) {
; X64-SSE2-LABEL: test_reduce_v32i16_v8i16:
; X64-SSE2: ## %bb.0:
; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; X64-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; X64-SSE2-NEXT: pxor %xmm2, %xmm0
-; X64-SSE2-NEXT: pxor %xmm2, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; X64-SSE2-NEXT: pminsw %xmm1, %xmm0
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
; X64-SSE2-NEXT: movdqa %xmm0, %xmm1
; X64-SSE2-NEXT: psrld $16, %xmm1
-; X64-SSE2-NEXT: pminsw %xmm0, %xmm1
-; X64-SSE2-NEXT: movd %xmm1, %eax
-; X64-SSE2-NEXT: xorl $32768, %eax ## imm = 0x8000
+; X64-SSE2-NEXT: movdqa %xmm0, %xmm2
+; X64-SSE2-NEXT: psubusw %xmm1, %xmm2
+; X64-SSE2-NEXT: psubw %xmm2, %xmm0
+; X64-SSE2-NEXT: movd %xmm0, %eax
; X64-SSE2-NEXT: ## kill: def $ax killed $ax killed $eax
; X64-SSE2-NEXT: retq
;
diff --git a/llvm/test/CodeGen/X86/machine-combiner-int-vec.ll b/llvm/test/CodeGen/X86/machine-combiner-int-vec.ll
index 4e07b4abde4b..21846a673c6b 100644
--- a/llvm/test/CodeGen/X86/machine-combiner-int-vec.ll
+++ b/llvm/test/CodeGen/X86/machine-combiner-int-vec.ll
@@ -327,13 +327,10 @@ define <8 x i16> @reassociate_umax_v8i16(<8 x i16> %x0, <8 x i16> %x1, <8 x i16>
; SSE-LABEL: reassociate_umax_v8i16:
; SSE: # %bb.0:
; SSE-NEXT: paddw %xmm1, %xmm0
-; SSE-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm1, %xmm2
-; SSE-NEXT: pxor %xmm1, %xmm0
-; SSE-NEXT: pmaxsw %xmm2, %xmm0
-; SSE-NEXT: pxor %xmm1, %xmm3
-; SSE-NEXT: pmaxsw %xmm3, %xmm0
-; SSE-NEXT: pxor %xmm1, %xmm0
+; SSE-NEXT: psubusw %xmm2, %xmm0
+; SSE-NEXT: paddw %xmm2, %xmm0
+; SSE-NEXT: psubusw %xmm3, %xmm0
+; SSE-NEXT: paddw %xmm3, %xmm0
; SSE-NEXT: retq
;
; AVX-LABEL: reassociate_umax_v8i16:
@@ -626,13 +623,13 @@ define <8 x i16> @reassociate_umin_v8i16(<8 x i16> %x0, <8 x i16> %x1, <8 x i16>
; SSE-LABEL: reassociate_umin_v8i16:
; SSE: # %bb.0:
; SSE-NEXT: paddw %xmm1, %xmm0
-; SSE-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm1, %xmm2
-; SSE-NEXT: pxor %xmm1, %xmm0
-; SSE-NEXT: pminsw %xmm2, %xmm0
-; SSE-NEXT: pxor %xmm1, %xmm3
-; SSE-NEXT: pminsw %xmm3, %xmm0
-; SSE-NEXT: pxor %xmm1, %xmm0
+; SSE-NEXT: movdqa %xmm2, %xmm1
+; SSE-NEXT: psubusw %xmm0, %xmm1
+; SSE-NEXT: psubw %xmm1, %xmm2
+; SSE-NEXT: movdqa %xmm3, %xmm0
+; SSE-NEXT: psubusw %xmm2, %xmm0
+; SSE-NEXT: psubw %xmm0, %xmm3
+; SSE-NEXT: movdqa %xmm3, %xmm0
; SSE-NEXT: retq
;
; AVX-LABEL: reassociate_umin_v8i16:
@@ -930,19 +927,14 @@ define <16 x i16> @reassociate_umax_v16i16(<16 x i16> %x0, <16 x i16> %x1, <16 x
; SSE: # %bb.0:
; SSE-NEXT: paddw %xmm2, %xmm0
; SSE-NEXT: paddw %xmm3, %xmm1
-; SSE-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm2, %xmm5
-; SSE-NEXT: pxor %xmm2, %xmm1
-; SSE-NEXT: pmaxsw %xmm5, %xmm1
-; SSE-NEXT: pxor %xmm2, %xmm4
-; SSE-NEXT: pxor %xmm2, %xmm0
-; SSE-NEXT: pmaxsw %xmm4, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm6
-; SSE-NEXT: pmaxsw %xmm6, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm7
-; SSE-NEXT: pmaxsw %xmm7, %xmm1
-; SSE-NEXT: pxor %xmm2, %xmm1
+; SSE-NEXT: psubusw %xmm5, %xmm1
+; SSE-NEXT: paddw %xmm5, %xmm1
+; SSE-NEXT: psubusw %xmm4, %xmm0
+; SSE-NEXT: paddw %xmm4, %xmm0
+; SSE-NEXT: psubusw %xmm6, %xmm0
+; SSE-NEXT: paddw %xmm6, %xmm0
+; SSE-NEXT: psubusw %xmm7, %xmm1
+; SSE-NEXT: paddw %xmm7, %xmm1
; SSE-NEXT: retq
;
; AVX-LABEL: reassociate_umax_v16i16:
@@ -1343,19 +1335,20 @@ define <16 x i16> @reassociate_umin_v16i16(<16 x i16> %x0, <16 x i16> %x1, <16 x
; SSE: # %bb.0:
; SSE-NEXT: paddw %xmm2, %xmm0
; SSE-NEXT: paddw %xmm3, %xmm1
-; SSE-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm2, %xmm5
-; SSE-NEXT: pxor %xmm2, %xmm1
-; SSE-NEXT: pminsw %xmm5, %xmm1
-; SSE-NEXT: pxor %xmm2, %xmm4
-; SSE-NEXT: pxor %xmm2, %xmm0
-; SSE-NEXT: pminsw %xmm4, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm6
-; SSE-NEXT: pminsw %xmm6, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm7
-; SSE-NEXT: pminsw %xmm7, %xmm1
-; SSE-NEXT: pxor %xmm2, %xmm1
+; SSE-NEXT: movdqa %xmm5, %xmm2
+; SSE-NEXT: psubusw %xmm1, %xmm2
+; SSE-NEXT: psubw %xmm2, %xmm5
+; SSE-NEXT: movdqa %xmm4, %xmm1
+; SSE-NEXT: psubusw %xmm0, %xmm1
+; SSE-NEXT: psubw %xmm1, %xmm4
+; SSE-NEXT: movdqa %xmm6, %xmm0
+; SSE-NEXT: psubusw %xmm4, %xmm0
+; SSE-NEXT: psubw %xmm0, %xmm6
+; SSE-NEXT: movdqa %xmm7, %xmm0
+; SSE-NEXT: psubusw %xmm5, %xmm0
+; SSE-NEXT: psubw %xmm0, %xmm7
+; SSE-NEXT: movdqa %xmm6, %xmm0
+; SSE-NEXT: movdqa %xmm7, %xmm1
; SSE-NEXT: retq
;
; AVX-LABEL: reassociate_umin_v16i16:
@@ -1771,43 +1764,34 @@ define <64 x i8> @reassociate_umax_v64i8(<64 x i8> %x0, <64 x i8> %x1, <64 x i8>
define <32 x i16> @reassociate_umax_v32i16(<32 x i16> %x0, <32 x i16> %x1, <32 x i16> %x2, <32 x i16> %x3) {
; SSE-LABEL: reassociate_umax_v32i16:
; SSE: # %bb.0:
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm8
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm9
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm10
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm11
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm12
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm13
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm14
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm15
; SSE-NEXT: paddw %xmm4, %xmm0
; SSE-NEXT: paddw %xmm5, %xmm1
; SSE-NEXT: paddw %xmm6, %xmm2
; SSE-NEXT: paddw %xmm7, %xmm3
-; SSE-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm4, %xmm3
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm5
-; SSE-NEXT: pxor %xmm4, %xmm5
-; SSE-NEXT: pmaxsw %xmm3, %xmm5
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm3
-; SSE-NEXT: pmaxsw %xmm2, %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: pmaxsw %xmm1, %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm0
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm1
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: pmaxsw %xmm0, %xmm1
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm0
-; SSE-NEXT: pxor %xmm4, %xmm0
-; SSE-NEXT: pmaxsw %xmm1, %xmm0
-; SSE-NEXT: pxor %xmm4, %xmm0
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm1
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: pmaxsw %xmm2, %xmm1
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: pmaxsw %xmm3, %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm3
-; SSE-NEXT: pmaxsw %xmm5, %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm3
+; SSE-NEXT: psubusw %xmm15, %xmm3
+; SSE-NEXT: paddw %xmm15, %xmm3
+; SSE-NEXT: psubusw %xmm14, %xmm2
+; SSE-NEXT: paddw %xmm14, %xmm2
+; SSE-NEXT: psubusw %xmm13, %xmm1
+; SSE-NEXT: paddw %xmm13, %xmm1
+; SSE-NEXT: psubusw %xmm12, %xmm0
+; SSE-NEXT: paddw %xmm12, %xmm0
+; SSE-NEXT: psubusw %xmm11, %xmm0
+; SSE-NEXT: paddw %xmm11, %xmm0
+; SSE-NEXT: psubusw %xmm10, %xmm1
+; SSE-NEXT: paddw %xmm10, %xmm1
+; SSE-NEXT: psubusw %xmm9, %xmm2
+; SSE-NEXT: paddw %xmm9, %xmm2
+; SSE-NEXT: psubusw %xmm8, %xmm3
+; SSE-NEXT: paddw %xmm8, %xmm3
; SSE-NEXT: retq
;
; AVX2-LABEL: reassociate_umax_v32i16:
@@ -2536,43 +2520,46 @@ define <64 x i8> @reassociate_umin_v64i8(<64 x i8> %x0, <64 x i8> %x1, <64 x i8>
define <32 x i16> @reassociate_umin_v32i16(<32 x i16> %x0, <32 x i16> %x1, <32 x i16> %x2, <32 x i16> %x3) {
; SSE-LABEL: reassociate_umin_v32i16:
; SSE: # %bb.0:
-; SSE-NEXT: paddw %xmm4, %xmm0
-; SSE-NEXT: paddw %xmm5, %xmm1
-; SSE-NEXT: paddw %xmm6, %xmm2
-; SSE-NEXT: paddw %xmm7, %xmm3
-; SSE-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm4, %xmm3
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm5
-; SSE-NEXT: pxor %xmm4, %xmm5
-; SSE-NEXT: pminsw %xmm3, %xmm5
-; SSE-NEXT: pxor %xmm4, %xmm2
+; SSE-NEXT: movdqa %xmm3, %xmm8
+; SSE-NEXT: movdqa %xmm2, %xmm9
+; SSE-NEXT: movdqa %xmm1, %xmm10
+; SSE-NEXT: movdqa %xmm0, %xmm11
; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm3
-; SSE-NEXT: pminsw %xmm2, %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm1
; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: pminsw %xmm1, %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm0
; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm1
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: pminsw %xmm0, %xmm1
; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm0
-; SSE-NEXT: pxor %xmm4, %xmm0
-; SSE-NEXT: pminsw %xmm1, %xmm0
-; SSE-NEXT: pxor %xmm4, %xmm0
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm1
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: pminsw %xmm2, %xmm1
-; SSE-NEXT: pxor %xmm4, %xmm1
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: pminsw %xmm3, %xmm2
-; SSE-NEXT: pxor %xmm4, %xmm2
-; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm3
-; SSE-NEXT: pminsw %xmm5, %xmm3
-; SSE-NEXT: pxor %xmm4, %xmm3
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm14
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm15
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm13
+; SSE-NEXT: movdqa {{[0-9]+}}(%rsp), %xmm12
+; SSE-NEXT: paddw %xmm4, %xmm11
+; SSE-NEXT: paddw %xmm5, %xmm10
+; SSE-NEXT: paddw %xmm6, %xmm9
+; SSE-NEXT: paddw %xmm7, %xmm8
+; SSE-NEXT: movdqa %xmm12, %xmm4
+; SSE-NEXT: psubusw %xmm8, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm12
+; SSE-NEXT: movdqa %xmm13, %xmm4
+; SSE-NEXT: psubusw %xmm9, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm13
+; SSE-NEXT: movdqa %xmm15, %xmm4
+; SSE-NEXT: psubusw %xmm10, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm15
+; SSE-NEXT: movdqa %xmm14, %xmm4
+; SSE-NEXT: psubusw %xmm11, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm14
+; SSE-NEXT: movdqa %xmm0, %xmm4
+; SSE-NEXT: psubusw %xmm14, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm0
+; SSE-NEXT: movdqa %xmm1, %xmm4
+; SSE-NEXT: psubusw %xmm15, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm1
+; SSE-NEXT: movdqa %xmm2, %xmm4
+; SSE-NEXT: psubusw %xmm13, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm2
+; SSE-NEXT: movdqa %xmm3, %xmm4
+; SSE-NEXT: psubusw %xmm12, %xmm4
+; SSE-NEXT: psubw %xmm4, %xmm3
; SSE-NEXT: retq
;
; AVX2-LABEL: reassociate_umin_v32i16:
diff --git a/llvm/test/CodeGen/X86/masked_store_trunc_usat.ll b/llvm/test/CodeGen/X86/masked_store_trunc_usat.ll
index 876476bd8c57..973c77411645 100644
--- a/llvm/test/CodeGen/X86/masked_store_trunc_usat.ll
+++ b/llvm/test/CodeGen/X86/masked_store_trunc_usat.ll
@@ -5188,14 +5188,13 @@ define void @truncstore_v32i16_v32i8(<32 x i16> %x, <32 x i8>* %p, <32 x i8> %ma
; SSE2-LABEL: truncstore_v32i16_v32i8:
; SSE2: # %bb.0:
; SSE2-NEXT: pxor %xmm7, %xmm7
-; SSE2-NEXT: movdqa {{.*#+}} xmm6 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm6, %xmm1
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [33023,33023,33023,33023,33023,33023,33023,33023]
-; SSE2-NEXT: pminsw %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm6, %xmm1
-; SSE2-NEXT: pxor %xmm6, %xmm0
-; SSE2-NEXT: pminsw %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm6, %xmm0
+; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [255,255,255,255,255,255,255,255]
+; SSE2-NEXT: movdqa %xmm1, %xmm6
+; SSE2-NEXT: psubusw %xmm8, %xmm6
+; SSE2-NEXT: psubw %xmm6, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm6
+; SSE2-NEXT: psubusw %xmm8, %xmm6
+; SSE2-NEXT: psubw %xmm6, %xmm0
; SSE2-NEXT: packuswb %xmm1, %xmm0
; SSE2-NEXT: pcmpeqb %xmm7, %xmm4
; SSE2-NEXT: pmovmskb %xmm4, %ecx
@@ -5265,23 +5264,23 @@ define void @truncstore_v32i16_v32i8(<32 x i16> %x, <32 x i8>* %p, <32 x i8> %ma
; SSE2-NEXT: # %bb.23: # %cond.store21
; SSE2-NEXT: movb %ch, 11(%rdi)
; SSE2-NEXT: .LBB15_24: # %else22
-; SSE2-NEXT: pxor %xmm6, %xmm3
-; SSE2-NEXT: pxor %xmm6, %xmm2
; SSE2-NEXT: testl $4096, %eax # imm = 0x1000
; SSE2-NEXT: pextrw $6, %xmm0, %ecx
; SSE2-NEXT: je .LBB15_26
; SSE2-NEXT: # %bb.25: # %cond.store23
; SSE2-NEXT: movb %cl, 12(%rdi)
; SSE2-NEXT: .LBB15_26: # %else24
-; SSE2-NEXT: pminsw %xmm8, %xmm3
-; SSE2-NEXT: pminsw %xmm8, %xmm2
+; SSE2-NEXT: movdqa %xmm3, %xmm1
+; SSE2-NEXT: psubusw %xmm8, %xmm1
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: psubusw %xmm8, %xmm4
; SSE2-NEXT: testl $8192, %eax # imm = 0x2000
; SSE2-NEXT: je .LBB15_28
; SSE2-NEXT: # %bb.27: # %cond.store25
; SSE2-NEXT: movb %ch, 13(%rdi)
; SSE2-NEXT: .LBB15_28: # %else26
-; SSE2-NEXT: pxor %xmm6, %xmm3
-; SSE2-NEXT: pxor %xmm6, %xmm2
+; SSE2-NEXT: psubw %xmm1, %xmm3
+; SSE2-NEXT: psubw %xmm4, %xmm2
; SSE2-NEXT: testl $16384, %eax # imm = 0x4000
; SSE2-NEXT: pextrw $7, %xmm0, %ecx
; SSE2-NEXT: je .LBB15_30
@@ -6408,14 +6407,13 @@ define void @truncstore_v16i16_v16i8(<16 x i16> %x, <16 x i8>* %p, <16 x i8> %ma
; SSE2-LABEL: truncstore_v16i16_v16i8:
; SSE2: # %bb.0:
; SSE2-NEXT: pxor %xmm3, %xmm3
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: movdqa {{.*#+}} xmm5 = [33023,33023,33023,33023,33023,33023,33023,33023]
-; SSE2-NEXT: pminsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm5, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
+; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [255,255,255,255,255,255,255,255]
+; SSE2-NEXT: movdqa %xmm1, %xmm5
+; SSE2-NEXT: psubusw %xmm4, %xmm5
+; SSE2-NEXT: psubw %xmm5, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm5
+; SSE2-NEXT: psubusw %xmm4, %xmm5
+; SSE2-NEXT: psubw %xmm5, %xmm0
; SSE2-NEXT: packuswb %xmm1, %xmm0
; SSE2-NEXT: pcmpeqb %xmm2, %xmm3
; SSE2-NEXT: pmovmskb %xmm3, %eax
@@ -7049,10 +7047,9 @@ define void @truncstore_v8i16_v8i8(<8 x i16> %x, <8 x i8>* %p, <8 x i16> %mask)
; SSE2-LABEL: truncstore_v8i16_v8i8:
; SSE2: # %bb.0:
; SSE2-NEXT: pxor %xmm2, %xmm2
-; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm3, %xmm0
-; SSE2-NEXT: pminsw {{.*}}(%rip), %xmm0
-; SSE2-NEXT: pxor %xmm3, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: psubusw {{.*}}(%rip), %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm0
; SSE2-NEXT: packuswb %xmm0, %xmm0
; SSE2-NEXT: pcmpeqw %xmm1, %xmm2
; SSE2-NEXT: pcmpeqd %xmm1, %xmm1
diff --git a/llvm/test/CodeGen/X86/midpoint-int-vec-128.ll b/llvm/test/CodeGen/X86/midpoint-int-vec-128.ll
index dcb5806c51fb..a9843f4902ca 100644
--- a/llvm/test/CodeGen/X86/midpoint-int-vec-128.ll
+++ b/llvm/test/CodeGen/X86/midpoint-int-vec-128.ll
@@ -2170,20 +2170,19 @@ define <8 x i16> @vec128_i16_unsigned_reg_reg(<8 x i16> %a1, <8 x i16> %a2) noun
; SSE2-LABEL: vec128_i16_unsigned_reg_reg:
; SSE2: # %bb.0:
; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm3, %xmm1
-; SSE2-NEXT: movdqa %xmm0, %xmm2
-; SSE2-NEXT: pxor %xmm3, %xmm2
-; SSE2-NEXT: movdqa %xmm2, %xmm4
-; SSE2-NEXT: pcmpgtw %xmm1, %xmm4
-; SSE2-NEXT: por {{.*}}(%rip), %xmm4
-; SSE2-NEXT: movdqa %xmm2, %xmm5
-; SSE2-NEXT: pminsw %xmm1, %xmm5
-; SSE2-NEXT: pxor %xmm3, %xmm5
-; SSE2-NEXT: pmaxsw %xmm1, %xmm2
+; SSE2-NEXT: movdqa %xmm1, %xmm2
; SSE2-NEXT: pxor %xmm3, %xmm2
-; SSE2-NEXT: psubw %xmm5, %xmm2
+; SSE2-NEXT: pxor %xmm0, %xmm3
+; SSE2-NEXT: pcmpgtw %xmm2, %xmm3
+; SSE2-NEXT: por {{.*}}(%rip), %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: psubw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm1, %xmm2
; SSE2-NEXT: psrlw $1, %xmm2
-; SSE2-NEXT: pmullw %xmm4, %xmm2
+; SSE2-NEXT: pmullw %xmm3, %xmm2
; SSE2-NEXT: paddw %xmm0, %xmm2
; SSE2-NEXT: movdqa %xmm2, %xmm0
; SSE2-NEXT: retq
diff --git a/llvm/test/CodeGen/X86/sat-add.ll b/llvm/test/CodeGen/X86/sat-add.ll
index 23b91c01dd6c..81260ac2a097 100644
--- a/llvm/test/CodeGen/X86/sat-add.ll
+++ b/llvm/test/CodeGen/X86/sat-add.ll
@@ -395,10 +395,9 @@ define <16 x i8> @unsigned_sat_constant_v16i8_using_cmp_notval(<16 x i8> %x) {
define <8 x i16> @unsigned_sat_constant_v8i16_using_min(<8 x i16> %x) {
; SSE2-LABEL: unsigned_sat_constant_v8i16_using_min:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm1, %xmm0
-; SSE2-NEXT: pminsw {{.*}}(%rip), %xmm0
-; SSE2-NEXT: pxor %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: psubusw {{.*}}(%rip), %xmm1
+; SSE2-NEXT: psubw %xmm1, %xmm0
; SSE2-NEXT: paddw {{.*}}(%rip), %xmm0
; SSE2-NEXT: retq
;
@@ -677,12 +676,11 @@ define <16 x i8> @unsigned_sat_variable_v16i8_using_cmp_notval(<16 x i8> %x, <16
define <8 x i16> @unsigned_sat_variable_v8i16_using_min(<8 x i16> %x, <8 x i16> %y) {
; SSE2-LABEL: unsigned_sat_variable_v8i16_using_min:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [32767,32767,32767,32767,32767,32767,32767,32767]
-; SSE2-NEXT: pxor %xmm1, %xmm3
-; SSE2-NEXT: pminsw %xmm3, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: pcmpeqd %xmm2, %xmm2
+; SSE2-NEXT: pxor %xmm1, %xmm2
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: psubusw %xmm2, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm0
; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
diff --git a/llvm/test/CodeGen/X86/umax.ll b/llvm/test/CodeGen/X86/umax.ll
index 14a0248e1914..38052f339af7 100644
--- a/llvm/test/CodeGen/X86/umax.ll
+++ b/llvm/test/CodeGen/X86/umax.ll
@@ -457,11 +457,8 @@ define <8 x i32> @test_v8i32(<8 x i32> %a, <8 x i32> %b) nounwind {
define <8 x i16> @test_v8i16(<8 x i16> %a, <8 x i16> %b) nounwind {
; SSE-LABEL: test_v8i16:
; SSE: # %bb.0:
-; SSE-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm2, %xmm1
-; SSE-NEXT: pxor %xmm2, %xmm0
-; SSE-NEXT: pmaxsw %xmm1, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm0
+; SSE-NEXT: psubusw %xmm0, %xmm1
+; SSE-NEXT: paddw %xmm1, %xmm0
; SSE-NEXT: retq
;
; AVX-LABEL: test_v8i16:
diff --git a/llvm/test/CodeGen/X86/umin.ll b/llvm/test/CodeGen/X86/umin.ll
index 234c4faf6cd2..84170d75f67b 100644
--- a/llvm/test/CodeGen/X86/umin.ll
+++ b/llvm/test/CodeGen/X86/umin.ll
@@ -456,11 +456,9 @@ define <8 x i32> @test_v8i32(<8 x i32> %a, <8 x i32> %b) nounwind {
define <8 x i16> @test_v8i16(<8 x i16> %a, <8 x i16> %b) nounwind {
; SSE-LABEL: test_v8i16:
; SSE: # %bb.0:
-; SSE-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE-NEXT: pxor %xmm2, %xmm1
-; SSE-NEXT: pxor %xmm2, %xmm0
-; SSE-NEXT: pminsw %xmm1, %xmm0
-; SSE-NEXT: pxor %xmm2, %xmm0
+; SSE-NEXT: movdqa %xmm0, %xmm2
+; SSE-NEXT: psubusw %xmm1, %xmm2
+; SSE-NEXT: psubw %xmm2, %xmm0
; SSE-NEXT: retq
;
; AVX-LABEL: test_v8i16:
diff --git a/llvm/test/CodeGen/X86/vec_minmax_uint.ll b/llvm/test/CodeGen/X86/vec_minmax_uint.ll
index beb69060034e..14c023761dd0 100644
--- a/llvm/test/CodeGen/X86/vec_minmax_uint.ll
+++ b/llvm/test/CodeGen/X86/vec_minmax_uint.ll
@@ -302,11 +302,8 @@ define <8 x i32> @max_gt_v8i32(<8 x i32> %a, <8 x i32> %b) {
define <8 x i16> @max_gt_v8i16(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: max_gt_v8i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSE41-LABEL: max_gt_v8i16:
@@ -331,15 +328,10 @@ define <8 x i16> @max_gt_v8i16(<8 x i16> %a, <8 x i16> %b) {
define <16 x i16> @max_gt_v16i16(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: max_gt_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm3, %xmm1
; SSE2-NEXT: retq
;
; SSE41-LABEL: max_gt_v16i16:
@@ -717,11 +709,8 @@ define <8 x i32> @max_ge_v8i32(<8 x i32> %a, <8 x i32> %b) {
define <8 x i16> @max_ge_v8i16(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: max_ge_v8i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSE41-LABEL: max_ge_v8i16:
@@ -746,15 +735,10 @@ define <8 x i16> @max_ge_v8i16(<8 x i16> %a, <8 x i16> %b) {
define <16 x i16> @max_ge_v16i16(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: max_ge_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm3, %xmm1
; SSE2-NEXT: retq
;
; SSE41-LABEL: max_ge_v16i16:
@@ -1130,11 +1114,9 @@ define <8 x i32> @min_lt_v8i32(<8 x i32> %a, <8 x i32> %b) {
define <8 x i16> @min_lt_v8i16(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: min_lt_v8i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSE41-LABEL: min_lt_v8i16:
@@ -1159,15 +1141,12 @@ define <8 x i16> @min_lt_v8i16(<8 x i16> %a, <8 x i16> %b) {
define <16 x i16> @min_lt_v16i16(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: min_lt_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm1
; SSE2-NEXT: retq
;
; SSE41-LABEL: min_lt_v16i16:
@@ -1543,11 +1522,9 @@ define <8 x i32> @min_le_v8i32(<8 x i32> %a, <8 x i32> %b) {
define <8 x i16> @min_le_v8i16(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: min_le_v8i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSE41-LABEL: min_le_v8i16:
@@ -1572,15 +1549,12 @@ define <8 x i16> @min_le_v8i16(<8 x i16> %a, <8 x i16> %b) {
define <16 x i16> @min_le_v16i16(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: min_le_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm1
; SSE2-NEXT: retq
;
; SSE41-LABEL: min_le_v16i16:
diff --git a/llvm/test/CodeGen/X86/vector-reduce-umax.ll b/llvm/test/CodeGen/X86/vector-reduce-umax.ll
index 27bf159b0c8c..612812977a8b 100644
--- a/llvm/test/CodeGen/X86/vector-reduce-umax.ll
+++ b/llvm/test/CodeGen/X86/vector-reduce-umax.ll
@@ -1278,12 +1278,9 @@ define i16 @test_v2i16(<2 x i16> %a0) {
; SSE2: # %bb.0:
; SSE2-NEXT: movdqa %xmm0, %xmm1
; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1319,15 +1316,13 @@ define i16 @test_v4i16(<4 x i16> %a0) {
; SSE2-LABEL: test_v4i16:
; SSE2: # %bb.0:
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
; SSE2-NEXT: movdqa %xmm1, %xmm0
; SSE2-NEXT: psrld $16, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm0
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: movd %xmm0, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1369,17 +1364,16 @@ define i16 @test_v8i16(<8 x i16> %a0) {
; SSE2-LABEL: test_v8i16:
; SSE2: # %bb.0:
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm0
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: movdqa %xmm0, %xmm1
; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1429,19 +1423,19 @@ define i16 @test_v8i16(<8 x i16> %a0) {
define i16 @test_v16i16(<16 x i16> %a0) {
; SSE2-LABEL: test_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: movdqa %xmm0, %xmm1
-; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
+; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,3,2,3]
+; SSE2-NEXT: psubusw %xmm1, %xmm0
+; SSE2-NEXT: paddw %xmm1, %xmm0
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
+; SSE2-NEXT: movdqa %xmm1, %xmm0
+; SSE2-NEXT: psrld $16, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm0
+; SSE2-NEXT: paddw %xmm1, %xmm0
+; SSE2-NEXT: movd %xmm0, %eax
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1512,23 +1506,23 @@ define i16 @test_v16i16(<16 x i16> %a0) {
define i16 @test_v32i16(<32 x i16> %a0) {
; SSE2-LABEL: test_v32i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pmaxsw %xmm1, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: movdqa %xmm0, %xmm1
-; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm1, %xmm3
+; SSE2-NEXT: psubusw %xmm2, %xmm3
+; SSE2-NEXT: paddw %xmm2, %xmm3
+; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm3[2,3,2,3]
+; SSE2-NEXT: psubusw %xmm3, %xmm0
+; SSE2-NEXT: paddw %xmm3, %xmm0
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
+; SSE2-NEXT: movdqa %xmm1, %xmm0
+; SSE2-NEXT: psrld $16, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm0
+; SSE2-NEXT: paddw %xmm1, %xmm0
+; SSE2-NEXT: movd %xmm0, %eax
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1609,31 +1603,31 @@ define i16 @test_v32i16(<32 x i16> %a0) {
define i16 @test_v64i16(<64 x i16> %a0) {
; SSE2-LABEL: test_v64i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pmaxsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pmaxsw %xmm2, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pmaxsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pmaxsw %xmm3, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pmaxsw %xmm5, %xmm1
-; SSE2-NEXT: pmaxsw %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,3,2,3]
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm5
+; SSE2-NEXT: paddw %xmm1, %xmm5
+; SSE2-NEXT: psubusw %xmm3, %xmm7
+; SSE2-NEXT: paddw %xmm3, %xmm7
+; SSE2-NEXT: psubusw %xmm0, %xmm4
+; SSE2-NEXT: paddw %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm6
+; SSE2-NEXT: paddw %xmm2, %xmm6
+; SSE2-NEXT: psubusw %xmm4, %xmm6
+; SSE2-NEXT: paddw %xmm4, %xmm6
+; SSE2-NEXT: psubusw %xmm5, %xmm7
+; SSE2-NEXT: paddw %xmm5, %xmm7
+; SSE2-NEXT: psubusw %xmm6, %xmm7
+; SSE2-NEXT: paddw %xmm6, %xmm7
+; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm7[2,3,2,3]
+; SSE2-NEXT: psubusw %xmm7, %xmm0
+; SSE2-NEXT: paddw %xmm7, %xmm0
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
-; SSE2-NEXT: pmaxsw %xmm0, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm0, %xmm1
; SSE2-NEXT: movdqa %xmm1, %xmm0
; SSE2-NEXT: psrld $16, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm0
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: movd %xmm0, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
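These reduction chains make the trade-off visible: each halving step now folds with a psubusw/paddw pair instead of a single pmaxsw, in exchange for dropping the hoisted sign-bit constant, the per-input pxor, and the final xorl. A scalar model of the 8-lane case in plain C (hypothetical helper name, same as in the earlier sketches):

#include <assert.h>
#include <stdint.h>

/* Hypothetical scalar model of one psubusw lane. */
static uint16_t usubsat16(uint16_t a, uint16_t b) {
    return a > b ? (uint16_t)(a - b) : 0;
}

/* Scalar model of the log2 umax reduction above: each halving step
   (pshufd/psrld in the vector code) folds lane i with lane i+stride
   via umax(a, b) = a + usubsat(b, a). */
static uint16_t reduce_umax_v8i16(const uint16_t v[8]) {
    uint16_t t[8];
    for (int i = 0; i < 8; i++) t[i] = v[i];
    for (int stride = 4; stride > 0; stride /= 2)
        for (int i = 0; i < stride; i++)
            t[i] = (uint16_t)(t[i] + usubsat16(t[i + stride], t[i]));
    return t[0];   /* movd, then the kill comment truncates to ax */
}

int main(void) {
    uint16_t v[8] = {7, 65535, 0, 42, 9, 9, 30000, 12};
    assert(reduce_umax_v8i16(v) == 65535);
    return 0;
}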
diff --git a/llvm/test/CodeGen/X86/vector-reduce-umin.ll b/llvm/test/CodeGen/X86/vector-reduce-umin.ll
index dee8970b96a5..7915cdd767f6 100644
--- a/llvm/test/CodeGen/X86/vector-reduce-umin.ll
+++ b/llvm/test/CodeGen/X86/vector-reduce-umin.ll
@@ -1282,12 +1282,10 @@ define i16 @test_v2i16(<2 x i16> %a0) {
; SSE2: # %bb.0:
; SSE2-NEXT: movdqa %xmm0, %xmm1
; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: movd %xmm0, %eax
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1323,15 +1321,15 @@ define i16 @test_v4i16(<4 x i16> %a0) {
; SSE2-LABEL: test_v4i16:
; SSE2: # %bb.0:
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: movdqa %xmm1, %xmm0
-; SSE2-NEXT: psrld $16, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: psrld $16, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: movd %xmm0, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1373,17 +1371,19 @@ define i16 @test_v8i16(<8 x i16> %a0) {
; SSE2-LABEL: test_v8i16:
; SSE2: # %bb.0:
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: movdqa %xmm0, %xmm1
; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: movd %xmm0, %eax
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1414,19 +1414,23 @@ define i16 @test_v8i16(<8 x i16> %a0) {
define i16 @test_v16i16(<16 x i16> %a0) {
; SSE2-LABEL: test_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: movdqa %xmm0, %xmm1
; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: movd %xmm0, %eax
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1474,23 +1478,29 @@ define i16 @test_v16i16(<16 x i16> %a0) {
define i16 @test_v32i16(<32 x i16> %a0) {
; SSE2-LABEL: test_v32i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pminsw %xmm1, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm3, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: psubusw %xmm2, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,1,1]
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: movdqa %xmm0, %xmm1
; SSE2-NEXT: psrld $16, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: movd %xmm1, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: movd %xmm0, %eax
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
@@ -1546,31 +1556,41 @@ define i16 @test_v32i16(<32 x i16> %a0) {
define i16 @test_v64i16(<64 x i16> %a0) {
; SSE2-LABEL: test_v64i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pminsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pminsw %xmm2, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pminsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pminsw %xmm3, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pminsw %xmm5, %xmm1
-; SSE2-NEXT: pminsw %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,3,2,3]
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm2, %xmm8
+; SSE2-NEXT: psubusw %xmm6, %xmm8
+; SSE2-NEXT: psubw %xmm8, %xmm2
+; SSE2-NEXT: movdqa %xmm0, %xmm6
+; SSE2-NEXT: psubusw %xmm4, %xmm6
+; SSE2-NEXT: psubw %xmm6, %xmm0
+; SSE2-NEXT: movdqa %xmm3, %xmm4
+; SSE2-NEXT: psubusw %xmm7, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm3
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm5, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm3, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: psubusw %xmm2, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,2,3]
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
-; SSE2-NEXT: pminsw %xmm0, %xmm1
-; SSE2-NEXT: movdqa %xmm1, %xmm0
-; SSE2-NEXT: psrld $16, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: psrld $16, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: movd %xmm0, %eax
-; SSE2-NEXT: xorl $32768, %eax # imm = 0x8000
; SSE2-NEXT: # kill: def $ax killed $ax killed $eax
; SSE2-NEXT: retq
;
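The umin reductions are where the extra movdqa per step shows up in the hunks above: the minuend has to survive the psubusw so it can feed the psubw, making each fold three instructions plus the shuffle. The min counterpart of the previous sketch, in plain C:

#include <assert.h>
#include <stdint.h>

/* Hypothetical scalar model of one psubusw lane. */
static uint16_t usubsat16(uint16_t a, uint16_t b) {
    return a > b ? (uint16_t)(a - b) : 0;
}

/* Scalar model of the log2 umin reduction: each halving step folds
   lane i with lane i+stride via umin(a, b) = a - usubsat(a, b).
   In the vector code every step needs a movdqa copy because psubusw
   destroys its destination while t[i] is still needed for the psubw. */
static uint16_t reduce_umin_v8i16(const uint16_t v[8]) {
    uint16_t t[8];
    for (int i = 0; i < 8; i++) t[i] = v[i];
    for (int stride = 4; stride > 0; stride /= 2)
        for (int i = 0; i < stride; i++)
            t[i] = (uint16_t)(t[i] - usubsat16(t[i], t[i + stride]));
    return t[0];
}

int main(void) {
    uint16_t v[8] = {7, 65535, 3, 42, 9, 9, 30000, 12};
    assert(reduce_umin_v8i16(v) == 3);
    return 0;
}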
diff --git a/llvm/test/CodeGen/X86/vector-trunc-usat.ll b/llvm/test/CodeGen/X86/vector-trunc-usat.ll
index 2e8f3bb6a22d..59601e7fb44f 100644
--- a/llvm/test/CodeGen/X86/vector-trunc-usat.ll
+++ b/llvm/test/CodeGen/X86/vector-trunc-usat.ll
@@ -4261,19 +4261,17 @@ define void @trunc_usat_v16i32_v16i8_store(<16 x i32>* %p0, <16 x i8>* %p1) {
define <8 x i8> @trunc_usat_v8i16_v8i8(<8 x i16> %a0) {
; SSE2-LABEL: trunc_usat_v8i16_v8i8:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm1, %xmm0
-; SSE2-NEXT: pminsw {{.*}}(%rip), %xmm0
-; SSE2-NEXT: pxor %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: psubusw {{.*}}(%rip), %xmm1
+; SSE2-NEXT: psubw %xmm1, %xmm0
; SSE2-NEXT: packuswb %xmm0, %xmm0
; SSE2-NEXT: retq
;
; SSSE3-LABEL: trunc_usat_v8i16_v8i8:
; SSSE3: # %bb.0:
-; SSSE3-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSSE3-NEXT: pxor %xmm1, %xmm0
-; SSSE3-NEXT: pminsw {{.*}}(%rip), %xmm0
-; SSSE3-NEXT: pxor %xmm1, %xmm0
+; SSSE3-NEXT: movdqa %xmm0, %xmm1
+; SSSE3-NEXT: psubusw {{.*}}(%rip), %xmm1
+; SSSE3-NEXT: psubw %xmm1, %xmm0
; SSSE3-NEXT: packuswb %xmm0, %xmm0
; SSSE3-NEXT: retq
;
@@ -4327,20 +4325,18 @@ define <8 x i8> @trunc_usat_v8i16_v8i8(<8 x i16> %a0) {
define void @trunc_usat_v8i16_v8i8_store(<8 x i16> %a0, <8 x i8> *%p1) {
; SSE2-LABEL: trunc_usat_v8i16_v8i8_store:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm1, %xmm0
-; SSE2-NEXT: pminsw {{.*}}(%rip), %xmm0
-; SSE2-NEXT: pxor %xmm1, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: psubusw {{.*}}(%rip), %xmm1
+; SSE2-NEXT: psubw %xmm1, %xmm0
; SSE2-NEXT: packuswb %xmm0, %xmm0
; SSE2-NEXT: movq %xmm0, (%rdi)
; SSE2-NEXT: retq
;
; SSSE3-LABEL: trunc_usat_v8i16_v8i8_store:
; SSSE3: # %bb.0:
-; SSSE3-NEXT: movdqa {{.*#+}} xmm1 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSSE3-NEXT: pxor %xmm1, %xmm0
-; SSSE3-NEXT: pminsw {{.*}}(%rip), %xmm0
-; SSSE3-NEXT: pxor %xmm1, %xmm0
+; SSSE3-NEXT: movdqa %xmm0, %xmm1
+; SSSE3-NEXT: psubusw {{.*}}(%rip), %xmm1
+; SSSE3-NEXT: psubw %xmm1, %xmm0
; SSSE3-NEXT: packuswb %xmm0, %xmm0
; SSSE3-NEXT: movq %xmm0, (%rdi)
; SSSE3-NEXT: retq
@@ -4400,27 +4396,25 @@ define void @trunc_usat_v8i16_v8i8_store(<8 x i16> %a0, <8 x i8> *%p1) {
define <16 x i8> @trunc_usat_v16i16_v16i8(<16 x i16> %a0) {
; SSE2-LABEL: trunc_usat_v16i16_v16i8:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [33023,33023,33023,33023,33023,33023,33023,33023]
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm3, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [255,255,255,255,255,255,255,255]
+; SSE2-NEXT: movdqa %xmm1, %xmm3
+; SSE2-NEXT: psubusw %xmm2, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: psubusw %xmm2, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm0
; SSE2-NEXT: packuswb %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSSE3-LABEL: trunc_usat_v16i16_v16i8:
; SSSE3: # %bb.0:
-; SSSE3-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSSE3-NEXT: pxor %xmm2, %xmm1
-; SSSE3-NEXT: movdqa {{.*#+}} xmm3 = [33023,33023,33023,33023,33023,33023,33023,33023]
-; SSSE3-NEXT: pminsw %xmm3, %xmm1
-; SSSE3-NEXT: pxor %xmm2, %xmm1
-; SSSE3-NEXT: pxor %xmm2, %xmm0
-; SSSE3-NEXT: pminsw %xmm3, %xmm0
-; SSSE3-NEXT: pxor %xmm2, %xmm0
+; SSSE3-NEXT: movdqa {{.*#+}} xmm2 = [255,255,255,255,255,255,255,255]
+; SSSE3-NEXT: movdqa %xmm1, %xmm3
+; SSSE3-NEXT: psubusw %xmm2, %xmm3
+; SSSE3-NEXT: psubw %xmm3, %xmm1
+; SSSE3-NEXT: movdqa %xmm0, %xmm3
+; SSSE3-NEXT: psubusw %xmm2, %xmm3
+; SSSE3-NEXT: psubw %xmm3, %xmm0
; SSSE3-NEXT: packuswb %xmm1, %xmm0
; SSSE3-NEXT: retq
;
@@ -4494,50 +4488,48 @@ define <16 x i8> @trunc_usat_v16i16_v16i8(<16 x i16> %a0) {
define <32 x i8> @trunc_usat_v32i16_v32i8(<32 x i16>* %p0) {
; SSE2-LABEL: trunc_usat_v32i16_v32i8:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: movdqa 48(%rdi), %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [33023,33023,33023,33023,33023,33023,33023,33023]
-; SSE2-NEXT: pminsw %xmm3, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: movdqa 32(%rdi), %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: packuswb %xmm0, %xmm1
-; SSE2-NEXT: movdqa 16(%rdi), %xmm4
-; SSE2-NEXT: pxor %xmm2, %xmm4
-; SSE2-NEXT: pminsw %xmm3, %xmm4
-; SSE2-NEXT: pxor %xmm2, %xmm4
; SSE2-NEXT: movdqa (%rdi), %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm3, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: packuswb %xmm4, %xmm0
+; SSE2-NEXT: movdqa 16(%rdi), %xmm2
+; SSE2-NEXT: movdqa 32(%rdi), %xmm1
+; SSE2-NEXT: movdqa 48(%rdi), %xmm3
+; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [255,255,255,255,255,255,255,255]
+; SSE2-NEXT: movdqa %xmm3, %xmm5
+; SSE2-NEXT: psubusw %xmm4, %xmm5
+; SSE2-NEXT: psubw %xmm5, %xmm3
+; SSE2-NEXT: movdqa %xmm1, %xmm5
+; SSE2-NEXT: psubusw %xmm4, %xmm5
+; SSE2-NEXT: psubw %xmm5, %xmm1
+; SSE2-NEXT: packuswb %xmm3, %xmm1
+; SSE2-NEXT: movdqa %xmm2, %xmm3
+; SSE2-NEXT: psubusw %xmm4, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm2
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: psubusw %xmm4, %xmm3
+; SSE2-NEXT: psubw %xmm3, %xmm0
+; SSE2-NEXT: packuswb %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSSE3-LABEL: trunc_usat_v32i16_v32i8:
; SSSE3: # %bb.0:
-; SSSE3-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSSE3-NEXT: movdqa 48(%rdi), %xmm0
-; SSSE3-NEXT: pxor %xmm2, %xmm0
-; SSSE3-NEXT: movdqa {{.*#+}} xmm3 = [33023,33023,33023,33023,33023,33023,33023,33023]
-; SSSE3-NEXT: pminsw %xmm3, %xmm0
-; SSSE3-NEXT: pxor %xmm2, %xmm0
-; SSSE3-NEXT: movdqa 32(%rdi), %xmm1
-; SSSE3-NEXT: pxor %xmm2, %xmm1
-; SSSE3-NEXT: pminsw %xmm3, %xmm1
-; SSSE3-NEXT: pxor %xmm2, %xmm1
-; SSSE3-NEXT: packuswb %xmm0, %xmm1
-; SSSE3-NEXT: movdqa 16(%rdi), %xmm4
-; SSSE3-NEXT: pxor %xmm2, %xmm4
-; SSSE3-NEXT: pminsw %xmm3, %xmm4
-; SSSE3-NEXT: pxor %xmm2, %xmm4
; SSSE3-NEXT: movdqa (%rdi), %xmm0
-; SSSE3-NEXT: pxor %xmm2, %xmm0
-; SSSE3-NEXT: pminsw %xmm3, %xmm0
-; SSSE3-NEXT: pxor %xmm2, %xmm0
-; SSSE3-NEXT: packuswb %xmm4, %xmm0
+; SSSE3-NEXT: movdqa 16(%rdi), %xmm2
+; SSSE3-NEXT: movdqa 32(%rdi), %xmm1
+; SSSE3-NEXT: movdqa 48(%rdi), %xmm3
+; SSSE3-NEXT: movdqa {{.*#+}} xmm4 = [255,255,255,255,255,255,255,255]
+; SSSE3-NEXT: movdqa %xmm3, %xmm5
+; SSSE3-NEXT: psubusw %xmm4, %xmm5
+; SSSE3-NEXT: psubw %xmm5, %xmm3
+; SSSE3-NEXT: movdqa %xmm1, %xmm5
+; SSSE3-NEXT: psubusw %xmm4, %xmm5
+; SSSE3-NEXT: psubw %xmm5, %xmm1
+; SSSE3-NEXT: packuswb %xmm3, %xmm1
+; SSSE3-NEXT: movdqa %xmm2, %xmm3
+; SSSE3-NEXT: psubusw %xmm4, %xmm3
+; SSSE3-NEXT: psubw %xmm3, %xmm2
+; SSSE3-NEXT: movdqa %xmm0, %xmm3
+; SSSE3-NEXT: psubusw %xmm4, %xmm3
+; SSSE3-NEXT: psubw %xmm3, %xmm0
+; SSSE3-NEXT: packuswb %xmm2, %xmm0
; SSSE3-NEXT: retq
;
; SSE41-LABEL: trunc_usat_v32i16_v32i8:
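The usat-truncation hunks show a side benefit: the clamp constant is now the plain 255 splat instead of the sign-bit-biased 33023 (0x8000 + 0xFF), since umin(x, 255) = x - usubsat(x, 255) needs no bias before the packuswb. One lane in plain C (hypothetical helper name):

#include <assert.h>
#include <stdint.h>

/* Hypothetical scalar model of one psubusw lane. */
static uint16_t usubsat16(uint16_t a, uint16_t b) {
    return a > b ? (uint16_t)(a - b) : 0;
}

/* One lane of trunc_usat_v8i16_v8i8: clamp to 255 via the usubsat
   identity; packuswb then just keeps the already in-range low byte. */
static uint8_t trunc_usat16to8(uint16_t x) {
    uint16_t clamped = (uint16_t)(x - usubsat16(x, 255)); /* psubusw+psubw */
    return (uint8_t)clamped;                              /* packuswb lane */
}

int main(void) {
    assert(trunc_usat16to8(12) == 12);
    assert(trunc_usat16to8(255) == 255);
    assert(trunc_usat16to8(256) == 255);
    assert(trunc_usat16to8(65535) == 255);
    return 0;
}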
diff --git a/llvm/test/CodeGen/X86/vselect-minmax.ll b/llvm/test/CodeGen/X86/vselect-minmax.ll
index 61e740275b81..af64a2b90fe2 100644
--- a/llvm/test/CodeGen/X86/vselect-minmax.ll
+++ b/llvm/test/CodeGen/X86/vselect-minmax.ll
@@ -239,11 +239,9 @@ entry:
define <8 x i16> @test13(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test13:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test13:
@@ -264,11 +262,9 @@ entry:
define <8 x i16> @test14(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test14:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test14:
@@ -289,11 +285,8 @@ entry:
define <8 x i16> @test15(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test15:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test15:
@@ -314,11 +307,8 @@ entry:
define <8 x i16> @test16(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test16:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test16:
@@ -985,15 +975,12 @@ entry:
define <16 x i16> @test37(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test37:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test37:
@@ -1029,15 +1016,12 @@ entry:
define <16 x i16> @test38(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test38:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test38:
@@ -1073,15 +1057,10 @@ entry:
define <16 x i16> @test39(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test39:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm3, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test39:
@@ -1117,15 +1096,10 @@ entry:
define <16 x i16> @test40(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test40:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm3, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test40:
@@ -1781,11 +1755,8 @@ entry:
define <8 x i16> @test61(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test61:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test61:
@@ -1806,11 +1777,8 @@ entry:
define <8 x i16> @test62(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test62:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pmaxsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm0, %xmm1
+; SSE2-NEXT: paddw %xmm1, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test62:
@@ -1831,11 +1799,9 @@ entry:
define <8 x i16> @test63(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test63:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test63:
@@ -1856,11 +1822,9 @@ entry:
define <8 x i16> @test64(<8 x i16> %a, <8 x i16> %b) {
; SSE2-LABEL: test64:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm2, %xmm1
-; SSE2-NEXT: pxor %xmm2, %xmm0
-; SSE2-NEXT: pminsw %xmm1, %xmm0
-; SSE2-NEXT: pxor %xmm2, %xmm0
+; SSE2-NEXT: movdqa %xmm0, %xmm2
+; SSE2-NEXT: psubusw %xmm1, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm0
; SSE2-NEXT: retq
;
; SSE4-LABEL: test64:
@@ -2527,15 +2491,10 @@ entry:
define <16 x i16> @test85(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test85:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm3, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test85:
@@ -2571,15 +2530,10 @@ entry:
define <16 x i16> @test86(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test86:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pmaxsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pmaxsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: psubusw %xmm0, %xmm2
+; SSE2-NEXT: paddw %xmm2, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm3
+; SSE2-NEXT: paddw %xmm3, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test86:
@@ -2615,15 +2569,12 @@ entry:
define <16 x i16> @test87(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test87:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test87:
@@ -2659,15 +2610,12 @@ entry:
define <16 x i16> @test88(<16 x i16> %a, <16 x i16> %b) {
; SSE2-LABEL: test88:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm4, %xmm2
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pminsw %xmm2, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm4, %xmm3
-; SSE2-NEXT: pxor %xmm4, %xmm1
-; SSE2-NEXT: pminsw %xmm3, %xmm1
-; SSE2-NEXT: pxor %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm4
+; SSE2-NEXT: psubusw %xmm2, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm2
+; SSE2-NEXT: psubw %xmm2, %xmm1
; SSE2-NEXT: retq
;
; SSE4-LABEL: test88:
@@ -3667,23 +3615,18 @@ entry:
define <32 x i16> @test109(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test109:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pminsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pminsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pminsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pminsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm8
+; SSE2-NEXT: psubusw %xmm4, %xmm8
+; SSE2-NEXT: psubw %xmm8, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm5, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: psubusw %xmm6, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm2
+; SSE2-NEXT: movdqa %xmm3, %xmm4
+; SSE2-NEXT: psubusw %xmm7, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test109:
@@ -3727,23 +3670,18 @@ entry:
define <32 x i16> @test110(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test110:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pminsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pminsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pminsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pminsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm8
+; SSE2-NEXT: psubusw %xmm4, %xmm8
+; SSE2-NEXT: psubw %xmm8, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm5, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: psubusw %xmm6, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm2
+; SSE2-NEXT: movdqa %xmm3, %xmm4
+; SSE2-NEXT: psubusw %xmm7, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test110:
@@ -3787,23 +3725,14 @@ entry:
define <32 x i16> @test111(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test111:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pmaxsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pmaxsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pmaxsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pmaxsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: psubusw %xmm0, %xmm4
+; SSE2-NEXT: paddw %xmm4, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm5
+; SSE2-NEXT: paddw %xmm5, %xmm1
+; SSE2-NEXT: psubusw %xmm2, %xmm6
+; SSE2-NEXT: paddw %xmm6, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm7
+; SSE2-NEXT: paddw %xmm7, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test111:
@@ -3847,23 +3776,14 @@ entry:
define <32 x i16> @test112(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test112:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pmaxsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pmaxsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pmaxsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pmaxsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: psubusw %xmm0, %xmm4
+; SSE2-NEXT: paddw %xmm4, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm5
+; SSE2-NEXT: paddw %xmm5, %xmm1
+; SSE2-NEXT: psubusw %xmm2, %xmm6
+; SSE2-NEXT: paddw %xmm6, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm7
+; SSE2-NEXT: paddw %xmm7, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test112:
@@ -6123,23 +6043,14 @@ entry:
define <32 x i16> @test141(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test141:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pmaxsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pmaxsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pmaxsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pmaxsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: psubusw %xmm0, %xmm4
+; SSE2-NEXT: paddw %xmm4, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm5
+; SSE2-NEXT: paddw %xmm5, %xmm1
+; SSE2-NEXT: psubusw %xmm2, %xmm6
+; SSE2-NEXT: paddw %xmm6, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm7
+; SSE2-NEXT: paddw %xmm7, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test141:
@@ -6183,23 +6094,14 @@ entry:
define <32 x i16> @test142(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test142:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pmaxsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pmaxsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pmaxsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pmaxsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: psubusw %xmm0, %xmm4
+; SSE2-NEXT: paddw %xmm4, %xmm0
+; SSE2-NEXT: psubusw %xmm1, %xmm5
+; SSE2-NEXT: paddw %xmm5, %xmm1
+; SSE2-NEXT: psubusw %xmm2, %xmm6
+; SSE2-NEXT: paddw %xmm6, %xmm2
+; SSE2-NEXT: psubusw %xmm3, %xmm7
+; SSE2-NEXT: paddw %xmm7, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test142:
@@ -6243,23 +6145,18 @@ entry:
define <32 x i16> @test143(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test143:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pminsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pminsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pminsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pminsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm8
+; SSE2-NEXT: psubusw %xmm4, %xmm8
+; SSE2-NEXT: psubw %xmm8, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm5, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: psubusw %xmm6, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm2
+; SSE2-NEXT: movdqa %xmm3, %xmm4
+; SSE2-NEXT: psubusw %xmm7, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test143:
@@ -6303,23 +6200,18 @@ entry:
define <32 x i16> @test144(<32 x i16> %a, <32 x i16> %b) {
; SSE2-LABEL: test144:
; SSE2: # %bb.0: # %entry
-; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [32768,32768,32768,32768,32768,32768,32768,32768]
-; SSE2-NEXT: pxor %xmm8, %xmm4
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pminsw %xmm4, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm0
-; SSE2-NEXT: pxor %xmm8, %xmm5
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pminsw %xmm5, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm1
-; SSE2-NEXT: pxor %xmm8, %xmm6
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pminsw %xmm6, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm2
-; SSE2-NEXT: pxor %xmm8, %xmm7
-; SSE2-NEXT: pxor %xmm8, %xmm3
-; SSE2-NEXT: pminsw %xmm7, %xmm3
-; SSE2-NEXT: pxor %xmm8, %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm8
+; SSE2-NEXT: psubusw %xmm4, %xmm8
+; SSE2-NEXT: psubw %xmm8, %xmm0
+; SSE2-NEXT: movdqa %xmm1, %xmm4
+; SSE2-NEXT: psubusw %xmm5, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm1
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: psubusw %xmm6, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm2
+; SSE2-NEXT: movdqa %xmm3, %xmm4
+; SSE2-NEXT: psubusw %xmm7, %xmm4
+; SSE2-NEXT: psubw %xmm4, %xmm3
; SSE2-NEXT: retq
;
; SSE4-LABEL: test144: