[llvm] r338812 - [X86] When post-processing the DAG to remove zero extending moves for YMM/ZMM, make sure the producing instruction is VEX/XOP/EVEX encoded.

Craig Topper via llvm-commits llvm-commits at lists.llvm.org
Thu Aug 2 21:49:42 PDT 2018


Author: ctopper
Date: Thu Aug  2 21:49:42 2018
New Revision: 338812

URL: http://llvm.org/viewvc/llvm-project?rev=338812&view=rev
Log:
[X86] When post-processing the DAG to remove zero extending moves for YMM/ZMM, make sure the producing instruction is VEX/XOP/EVEX encoded.

If the producing instruction is legacy encoded, it doesn't implicitly zero the upper bits. This is important for the SHA instructions, which don't have VEX encoded versions. We might also be able to hit this with the incomplete f128 support that hasn't been ported to VEX.
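
In effect, the added guard reads the producing instruction's TSFlags and only drops the zero-extending move when the encoding field is VEX, EVEX, or XOP. A minimal sketch of that predicate, using the X86II definitions from X86BaseInfo.h (the helper name isExtendedEncoding is illustrative, not part of the patch):

    // True only for VEX/EVEX/XOP-encoded instructions, which implicitly zero
    // the upper bits of the destination YMM/ZMM register when writing an XMM.
    static bool isExtendedEncoding(uint64_t TSFlags) {
      uint64_t Enc = TSFlags & X86II::EncodingMask;
      return Enc == X86II::VEX || Enc == X86II::EVEX || Enc == X86II::XOP;
    }

Legacy-encoded producers, such as the SHA instructions, fail this test, so the explicit zeroing move is kept for them.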

Modified:
    llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp
    llvm/trunk/test/CodeGen/X86/sha.ll

Modified: llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp?rev=338812&r1=338811&r2=338812&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp Thu Aug  2 21:49:42 2018
@@ -881,6 +881,14 @@ void X86DAGToDAGISel::PostprocessISelDAG
         In.getMachineOpcode() <= TargetOpcode::GENERIC_OP_END)
       continue;
 
+    // Make sure the instruction has a VEX, XOP, or EVEX prefix. This covers
+    // the SHA instructions which use a legacy encoding.
+    uint64_t TSFlags = getInstrInfo()->get(In.getMachineOpcode()).TSFlags;
+    if ((TSFlags & X86II::EncodingMask) != X86II::VEX &&
+        (TSFlags & X86II::EncodingMask) != X86II::EVEX &&
+        (TSFlags & X86II::EncodingMask) != X86II::XOP)
+      continue;
+
     // Producing instruction is another vector instruction. We can drop the
     // move.
     CurDAG->UpdateNodeOperands(N, N->getOperand(0), In, N->getOperand(2));

Modified: llvm/trunk/test/CodeGen/X86/sha.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sha.ll?rev=338812&r1=338811&r2=338812&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/sha.ll (original)
+++ llvm/trunk/test/CodeGen/X86/sha.ll Thu Aug  2 21:49:42 2018
@@ -184,3 +184,23 @@ entry:
   %1 = tail call <4 x i32> @llvm.x86.sha256msg2(<4 x i32> %a, <4 x i32> %0)
   ret <4 x i32> %1
 }
+
+; Make sure we don't forget that sha instructions have no VEX equivalents and thus don't zero YMM/ZMM.
+define <8 x i32> @test_sha1rnds4_zero_extend(<4 x i32> %a, <4 x i32>* %b) nounwind uwtable {
+; SSE-LABEL: test_sha1rnds4_zero_extend:
+; SSE:       # %bb.0: # %entry
+; SSE-NEXT:    sha1rnds4 $3, (%rdi), %xmm0
+; SSE-NEXT:    xorps %xmm1, %xmm1
+; SSE-NEXT:    retq
+;
+; AVX-LABEL: test_sha1rnds4_zero_extend:
+; AVX:       # %bb.0: # %entry
+; AVX-NEXT:    sha1rnds4 $3, (%rdi), %xmm0
+; AVX-NEXT:    vmovaps %xmm0, %xmm0
+; AVX-NEXT:    retq
+entry:
+  %0 = load <4 x i32>, <4 x i32>* %b
+  %1 = tail call <4 x i32> @llvm.x86.sha1rnds4(<4 x i32> %a, <4 x i32> %0, i8 3)
+  %2 = shufflevector <4 x i32> %1, <4 x i32> zeroinitializer, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
+  ret <8 x i32> %2
+}
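
The AVX check lines above show the effect of the change: because sha1rnds4 has only a legacy encoding, the vmovaps %xmm0, %xmm0 is kept to zero the upper 128 bits of the YMM result rather than being dropped. As an informal illustration (not taken from the test output), the difference between the two encodings is:

    sha1rnds4 $3, %xmm1, %xmm0     # legacy encoding: ymm0[255:128] left unchanged
    vaddps    %xmm1, %xmm0, %xmm0  # VEX encoding: ymm0[255:128] implicitly zeroed

Without the guard, the post-processing step would have removed the vmovaps, and the upper lanes of the returned <8 x i32> would not have been zeroed.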