[llvm] r319208 - [X86] In lowerVectorShuffleAsElementInsertion, if we're able to find a scalar i8 or i16 and need to zero extend it, make sure we use a vXi32 type of the full vector width.
Craig Topper via llvm-commits
llvm-commits at lists.llvm.org
Tue Nov 28 11:25:45 PST 2017
Author: ctopper
Date: Tue Nov 28 11:25:45 2017
New Revision: 319208
URL: http://llvm.org/viewvc/llvm-project?rev=319208&view=rev
Log:
[X86] In lowerVectorShuffleAsElementInsertion, if we're able to find a scalar i8 or i16 and need to zero extend it, make sure we use a vXi32 type of the full vector width.
Previously, this was hardcoded to v4i32, but if the input type is 256 bits we need to use v8i32.
Fixes PR35443
Added:
llvm/trunk/test/CodeGen/X86/pr35443.ll
Modified:
llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=319208&r1=319207&r2=319208&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Tue Nov 28 11:25:45 2017
@@ -10149,7 +10149,7 @@ static SDValue lowerVectorShuffleAsEleme
return SDValue();
// Zero-extend directly to i32.
- ExtVT = MVT::v4i32;
+ ExtVT = MVT::getVectorVT(MVT::i32, ExtVT.getSizeInBits() / 32);
V2S = DAG.getNode(ISD::ZERO_EXTEND, DL, MVT::i32, V2S);
}
V2 = DAG.getNode(ISD::SCALAR_TO_VECTOR, DL, ExtVT, V2S);
Added: llvm/trunk/test/CodeGen/X86/pr35443.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr35443.ll?rev=319208&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/X86/pr35443.ll (added)
+++ llvm/trunk/test/CodeGen/X86/pr35443.ll Tue Nov 28 11:25:45 2017
@@ -0,0 +1,30 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=x86_64-unknown -mcpu=skx | FileCheck %s
+
+ at ac = external local_unnamed_addr global [20 x i8], align 16
+ at ai3 = external local_unnamed_addr global [20 x i32], align 16
+
+; Function Attrs: norecurse nounwind uwtable
+define void @main() {
+; CHECK-LABEL: main:
+; CHECK: # BB#0: # %entry
+; CHECK-NEXT: movzbl ac+{{.*}}(%rip), %eax
+; CHECK-NEXT: vmovd %eax, %xmm0
+; CHECK-NEXT: vpxor %xmm1, %xmm1, %xmm1
+; CHECK-NEXT: vpsubq %ymm0, %ymm1, %ymm0
+; CHECK-NEXT: vpmovqd %ymm0, ai3+{{.*}}(%rip)
+; CHECK-NEXT: vzeroupper
+; CHECK-NEXT: retq
+entry:
+ %wide.masked.load66 = call <4 x i8> @llvm.masked.load.v4i8.p0v4i8(<4 x i8>* bitcast (i8* getelementptr inbounds ([20 x i8], [20 x i8]* @ac, i64 0, i64 4) to <4 x i8>*), i32 1, <4 x i1> <i1 true, i1 false, i1 false, i1 false>, <4 x i8> undef)
+ %0 = zext <4 x i8> %wide.masked.load66 to <4 x i64>
+ %1 = sub <4 x i64> zeroinitializer, %0
+ %predphi = shufflevector <4 x i64> %1, <4 x i64> undef, <4 x i32> <i32 0, i32 5, i32 6, i32 7>
+ %2 = trunc <4 x i64> %predphi to <4 x i32>
+ %3 = add <4 x i32> zeroinitializer, %2
+ store <4 x i32> %3, <4 x i32>* bitcast (i32* getelementptr inbounds ([20 x i32], [20 x i32]* @ai3, i64 0, i64 4) to <4 x i32>*), align 16
+ ret void
+}
+
+; Function Attrs: argmemonly nounwind readonly
+declare <4 x i8> @llvm.masked.load.v4i8.p0v4i8(<4 x i8>*, i32, <4 x i1>, <4 x i8>)